I honestly have no idea what you mean.

He's talking about the Quartz user space vs. device space separation: how the point system exposed to applications via the APIs is mapped to the pixel system on the display, or the dot system on your printer.

By default, the Retina display uses a scaling factor of 2, resulting in your points being mapped as 2x2 grids of pixels. Thus, your context's 1440x900 points are mapped onto 2880x1800 pixels on the device.

This hack simply reverts the scaling back to 1, making the context require 2880x1800 points to map onto the screen on the device.
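The point-to-pixel mapping described above boils down to simple arithmetic, which can be sketched like this (illustrative only; `points_to_pixels` is a made-up name, not the actual Quartz API):

```python
# Illustrative sketch of the Quartz point -> pixel mapping (not the real API).
def points_to_pixels(width_pt, height_pt, scale_factor):
    """Map a user-space size in points to device pixels."""
    return (width_pt * scale_factor, height_pt * scale_factor)

# Default Retina mode: scale factor 2, each point covers a 2x2 pixel grid.
print(points_to_pixels(1440, 900, 2))   # (2880, 1800)

# The "hack": scale factor 1, so points map 1:1 onto device pixels.
print(points_to_pixels(2880, 1800, 1))  # (2880, 1800)
```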

This has been explained thoroughly by Gnasher since page 10. I provided links to the appropriate documentation when needed.
 
That's simply not true, sorry.

With all due respect sir, you are absolutely into the deepest weeds here.

The matter of where the scaler is, in the monitor itself or in the host computer, is irrelevant. For a laptop in particular, belaboring this point is of no value whatsoever, since the display device is built into the computer.

The point is, if you are not rendering at the native resolution, then you are going to have to scale the image at some point, somewhere, before the user sees it. It doesn't matter where that happens: it is going to cost time, and it has the potential to introduce artifacts.

It is that simple.
 
Where in Windows does it let you set your display to 2880x1800?
[Attached image: resolutions2.png]
 
I can't understand your point or even if you are trying to make one.

I'm referring to this thing you said:

The thing is, in this case, you are locked into fonts and other graphical elements rendering to a pixel doubled virtual resolution of 1440x900.

Rendering is done at the native resolution, nothing is doubled unless there is not enough information (low dpi) to render it at the same size.

Most other graphical elements are rectangular in shape and a rectangle can also be scaled easily.

Rectangular in shape, yes, but in the case of images it depends on how much information is in that rectangle.

I honestly have no idea what you mean.

I'm referring to the developer documentation on how Quartz works on OS X; you can also check the WWDC sessions on how to develop for resolution independence (if you have a dev account).
 
With all due respect sir, you are absolutely into the deepest weeds here.

The matter of where the scaler is, in the monitor itself or in the host computer, is irrelevant. For a laptop in particular, belaboring this point is of no value whatsoever, since the display device is built into the computer.

How is it irrelevant? It explains everything that was said in response to the initial query that spawned this conversation. The scaler in a laptop is still in the monitor's logic. The host processor (CPU) and its graphics processor (GPU) are not involved in a resolution "switch". The OS produces a frame buffer and sends it to the GPU, which in turn provides the proper DVI/VGA/DP signal to set the monitor to the appropriate resolution. The frame buffer is then sent intact.

The monitor has to either do scaling or letterboxing, which is independent of the host. I don't understand what you're not getting about this. :confused:

The point is, if you are not rendering at the native resolution, then you are going to have to scale the image at some point, somewhere, before the user sees it. It doesn't matter where that happens: it is going to cost time, and it has the potential to introduce artifacts.

It is that simple.

Yes, but it has no impact on the performance of the host system if it's done by the monitor's logic. The host system's resources are 100% free, which was my point, and the question asked when I made it. You're not countering my point; frankly, I have no idea what point you're trying to make.

E.g.: if you run your game at 1280x800, you get, let's say, 45 FPS. Running the same game at 1440x900 results in 40 FPS, because the GPU/CPU needs to calculate more pixels.

Now hook up this computer to a 1280x800 monitor running at 1280x800 resolution: you get 45 FPS. Hook up the same computer to a 1440x900 monitor, but still set the resolution to 1280x800, and you get the same 45 FPS. That's the point: the monitor's resolution does not impact system performance when you "switch" to a non-native resolution, since any letterboxing/scaling is not done by the host.
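The arithmetic behind that FPS difference is just pixel counting; the render cost tracks the number of pixels drawn, not the panel's native resolution (a sketch; the FPS figures above are the poster's hypothetical numbers):

```python
# Rough sketch: render cost scales with the number of pixels computed per frame.
def pixel_count(width, height):
    return width * height

low = pixel_count(1280, 800)   # 1,024,000 pixels per frame
high = pixel_count(1440, 900)  # 1,296,000 pixels per frame

# ~27% more pixels to compute at the higher render resolution,
# regardless of which monitor eventually displays the frame.
extra = (high - low) / low
print(f"{extra:.0%} more pixels")
```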
 
Great. Now if you can give me a scale of 1 and a scale of 2, then give me a slider to choose any value between 1 and 2. This was the entire point of my post.

Instead what Apple gave us was the ability to render to a limited number of discrete resolutions, and then scale the entire resulting frame to the display's native resolution.

He's talking about the Quartz user space vs. device space separation: how the point system exposed to applications via the APIs is mapped to the pixel system on the display, or the dot system on your printer.

By default, the Retina display uses a scaling factor of 2, resulting in your points being mapped as 2x2 grids of pixels. Thus, your context's 1440x900 points are mapped onto 2880x1800 pixels on the device.

This hack simply reverts the scaling back to 1, making the context require 2880x1800 points to map onto the screen on the device.

This has been explained thoroughly by Gnasher since page 10. I provided links to the appropriate documentation when needed.


----------

How is it irrelevant ?

Because we are talking about a process, scaling, that is very far abstracted from the architecture of a particular computer system. The point remains that this process must happen somewhere; again, it will take time, and it will have side effects. That's all that is important to the discussion.
 
Great. Now if you can give me a scale of 1 and a scale of 2, then give me a slider to choose any value between 1 and 2. This was the entire point of my post.

Why then did you reply to me? I wasn't even talking about that. :confused:

There would probably be scenarios in which certain scaling values between 1 and 2 wouldn't map back onto 2880x1800 so well, which is why Apple chose to do it the way they did.

Frankly, I don't really care about that, I don't think it's important.

----------

Because we are talking about a process, scaling, that is very far abstracted from the architecture of a particular computer system. The point remains that this process must happen somewhere; again, it will take time, and it will have side effects. That's all that is important to the discussion.

The point is that you missed the point of my post about the performance impact of scaling to an LCD's non-native resolution, which has nothing to do with what you're discussing. Next time, just don't quote me if you're not going to directly address what I've been saying; you've made me waste my time trying to understand how your point fits with my initial statements.

You don't get to decide what is and isn't important to discuss in this topic, sorry.
 
It seems you don't know how the Retina display and Mac OS X work. iPad 3 = 2048 x 1536 in a 9.7" screen, and everybody loves it, and their eyes love it, too.

The iPad 3 isn't comparable to this story, though, is it? The iPad UI has been scaled up to work at that resolution, and obviously it looks a lot better than the previous two generations because of the pixel density.

Try running OS X natively at 2880 x 1800 inside 15" and then come back to me. The UI on the 27" is too small as it is, and that's only 2560x1440 pixels, inside a physical space that is 12" bigger diagonally.

I'm not saying the 2880 x 1800 Retina display (which is 1440 x 900 workable space) is too small. I'm saying that flat out 2880 x 1800 workable space will be far too small for that physical area.
 
Great. Now if you can give me a scale of 1 and a scale of 2, then give me a slider to choose any value between 1 and 2. This was the entire point of my post.

Why? This system is harder for EVERYONE to implement than what Apple actually did, and often produces worse results with more bugs.

Instead what Apple gave us was the ability to render to a limited number of discrete resolutions, and then scale the entire resulting frame to the display's native resolution.

Which is a better solution, both for developers (Apple and third-party) and for end users (looks better, works better with fewer bugs).
 
The iPad 3 isn't comparable to this story, though, is it? The iPad UI has been scaled up to work at that resolution, and obviously it looks a lot better than the previous two generations because of the pixel density.

Try running OS X natively at 2880 x 1800 inside 15" and then come back to me. The UI on the 27" is too small as it is, and that's only 2560x1440 pixels, inside a physical space that is 12" bigger diagonally.

Uh? The 27" ACD is 110 PPI; the UI is bigger on it than on an 11" MBA, which runs at 135 PPI.

In fact, the 27" ACD is equivalent to 1440x900 on a 15.4" MBP. It's just huge. The only reason it's acceptable on the 27" is that you usually sit farther away from it, which makes the pixels appear smaller (1 inch at 36" from your eye appears smaller than 1 inch at 24" from your eye).
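The viewing-distance point can be made concrete with a little trigonometry: apparent size is angular size, which shrinks as distance grows (a sketch; the distances are just the ones mentioned above):

```python
import math

def angular_size_deg(size_in, distance_in):
    """Apparent angular size, in degrees, of an object at a viewing distance."""
    return math.degrees(2 * math.atan(size_in / (2 * distance_in)))

# One inch of screen viewed from 24" vs. 36":
near = angular_size_deg(1, 24)  # ~2.39 degrees
far = angular_size_deg(1, 36)   # ~1.59 degrees, i.e. it looks smaller
```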
 
I'm referring to this thing you said:

Rendering is done at the native resolution, nothing is doubled unless there is not enough information (low dpi) to render it at the same size.

And that's not relevant to the point I was making to begin with. The issue is not the quality of the pixel-doubled output; the issue is that it is doubled at all. In principle, there is no reason why it must be so. Instead, it could be multiplied by any factor between 1 and 2, and this factor could be user-configurable.

Rectangular in shape, yes, but in the case of images it depends on how much information is in that rectangle.

Certainly. And again, scaling an image by 2x in each dimension (if and when that takes place) is no less destructive than doing so by some other factor between one and two. Strictly speaking, that may not hold, depending on the scaling algorithm. But certainly applying such an algorithm on an element-by-element basis cannot be worse than punting and applying the scaling to the entire screen buffer.

I'm referring to the developer documentation on how Quartz works on OS X; you can also check the WWDC sessions on how to develop for resolution independence (if you have a dev account).

I will check it out. I am basing my information on what was reported by Anandtech.
 
The issue is not the quality of the pixel-doubled output; the issue is that it is doubled at all. In principle, there is no reason why it must be so. Instead, it could be multiplied by any factor between 1 and 2, and this factor could be user-configurable.

If you scale something by 2x, then everything falls nicely within an existing pixel; if you scale by, say, 1.33345, then some quantization will happen, since that exact position may not exist.
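That quantization effect is easy to demonstrate: with an integer factor every scaled coordinate lands exactly on a pixel, while a fractional factor forces rounding (a sketch with a made-up helper, not Quartz's actual resampling):

```python
def scaled_positions(coords, factor):
    """Scale ideal coordinates, then snap them to the pixel grid by rounding."""
    ideal = [c * factor for c in coords]
    snapped = [round(x) for x in ideal]
    error = [abs(i - s) for i, s in zip(ideal, snapped)]
    return ideal, snapped, error

# Integer factor: everything falls exactly on a pixel, zero rounding error.
_, _, err2 = scaled_positions([0, 1, 2, 3], 2)
print(err2)  # [0, 0, 0, 0]

# Fractional factor: positions like 1.33345 don't exist on the grid.
_, _, err_frac = scaled_positions([0, 1, 2, 3], 1.33345)
print(err_frac)  # nonzero rounding errors for most coordinates
```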
 
Why then did you reply to me? I wasn't even talking about that. :confused:

What?

Guy, the first exchange between us in this thread was when you replied to my post and said "This is simply not true...".

You were wrong, so I replied to address your misunderstanding.

You don't get to decide what is and isn't important to discuss in this topic, sorry.

You really ought to lighten up. You posted incorrect and misleading information, and I replied to correct you. The fact remains that the exact place on the circuit board where the scaling operation is performed is not relevant. We are talking about a process which, by definition, has time and other costs associated with it.
 
You really ought to lighten up. You posted incorrect and misleading information, and I replied to correct you. The fact remains that the exact place on the circuit board where the scaling operation is performed is not relevant. We are talking about a process which, by definition, has time and other costs associated with it.

It's relevant in terms of quality—in-monitor scaling on LCDs has always been of lower quality than the scaling possible on the GPU.
 
This is why we still need a 17" retina display mac.

A MacBook Pro 17" with a Retina display would lend more practicality to higher resolutions and make a real desktop-replacement laptop possible. With better marketing, positioning them as desktop replacements, this could increase sales of 17" laptops. Buying a smaller-form-factor laptop with the intention of using higher resolutions is as impractical as working through a periscope; more display area is needed for higher resolutions. A Retina iMac would also be fantastic, but it remains a technical feat to create an affordable 27" Retina display with acceptable yields, and to furnish the computing power required to drive it.
 
If you scale something to x2 then everything falls nicely within an existing pixel, if you scale to say, 1.33345 then some quantization will happen since that exact position may not exist.

Alright, let's go with that for a moment.

So then, when you select one of the scaled resolutions offered in the control panel, such as 1920x1200, you are applying this process to the entire pixel doubled 3840x2400 frame buffer in order to display it at 2880x1800.

This was the very point of my original post.
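Under that hypothesis, the "looks like 1920x1200" mode works out as follows (a sketch of the pipeline as hypothesized in this thread; `scaled_mode` is a made-up name, not an actual API, and the real implementation details are not confirmed):

```python
# Sketch of the hypothesized HiDPI "scaled" pipeline.
def scaled_mode(look_like, panel):
    """Render at 2x the chosen virtual resolution, then downscale to the panel."""
    w, h = look_like
    backing = (w * 2, h * 2)           # pixel-doubled frame buffer
    downscale = backing[0] / panel[0]  # factor applied to reach native res
    return backing, downscale

backing, factor = scaled_mode((1920, 1200), (2880, 1800))
print(backing, factor)  # (3840, 2400) buffer, downscaled by ~1.33x
```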

----------

It's relevant in terms of quality—in-monitor scaling on LCDs has always been of lower quality than the scaling possible on the GPU.

But the entire point of my argument is that if you do no scaling at all - i.e. render to the native resolution of the display to begin with - then that will be superior no matter what quality the scaler is.

I spelled that out rather clearly, so it is a bit surprising that we are spending so much time discussing it.
 
You really ought to lighten up. You posted incorrect and misleading information, and I replied to correct you. The fact remains that the exact place on the circuit board where the scaling operation is performed is not relevant.

How was I incorrect in anything I posted (LCD monitor logic vs. host GPU/CPU utilisation, Quartz scaling factors, points vs. pixel coordinate systems)?

And yes, the circuit board where the scaling is performed is relevant. Depending on where you're offloading the scaling to, you'll have either a host performance impact or a display quality impact.

It is 100% relevant to understand the differences on where in the display chain the scaling is taking place.

You still don't seem to understand the difference between using the monitor's scaler vs a software scaler or a GPU assisted scaler.
 
But the entire point of my argument is that if you do no scaling at all - i.e. render to the native resolution of the display to begin with - then that will be superior no matter what quality the scaler is.

I spelled that out rather clearly, so it is a bit surprising that we are spending so much time discussing it.

Which the machine does, at the "Best for Retina" mode. However, in order to shrink stuff (to gain more desktop area), you're going to have to scale somewhere—whether that is done via an arbitrary scale factor for individual UI elements (what Apple was working on before HiDPI was inspired by the Retina modes in iOS), scaling the entire interface up or down post-compositing (what they're doing now), or scaling an input signal to the monitor, scaling will be done. And there's nothing about that first option that would make it any better quality than what Apple actually did, unless they hinted individual elements, which is still a form of distortion.

In any environment where something has to be quantized, there will be deviations from whatever Platonic ideal exists in one's head. ;)
 
Alright, let's go with that for a moment.

So then, when you select one of the scaled resolutions offered in the control panel, such as 1920x1200, you are applying this process to the entire pixel doubled 3840x2400 frame buffer in order to display it at 2880x1800.

This was the very point of my original post.

That is the hypothesis that Anandtech is going with; in reality, we don't really know any implementation details of Quartz.

But the entire point of my argument is that if you do no scaling at all - i.e. render to the native resolution of the display to begin with - then that will be superior no matter what quality the scaler is.

It is rendered at native resolution; high-DPI details will appear larger if they are displayed 1:1 on a lower-res display.
 
Which the machine does, at the "Best for Retina" mode.

Actually... that's not quite what the "machine" does. ;) Read the docs and gnasher's post about Quartz on page 10. Regardless of Retina or not, Quartz has to apply the CTM (Current Transformation Matrix) to the drawing context to map it from user space to device space. There will always be intermediate steps and operations unless you're writing directly to frame buffer memory on the GPU (which no modern OS allows anyhow).
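That CTM application can be illustrated with a toy affine transform, modeled loosely on CGAffineTransform's (a, b, c, d, tx, ty) layout (an illustration in plain Python, not Quartz itself):

```python
# Illustrative affine transform, loosely modeled on CGAffineTransform:
#   x' = a*x + c*y + tx
#   y' = b*x + d*y + ty
def apply_ctm(point, ctm):
    """Map a user-space point into device space through a CTM."""
    x, y = point
    a, b, c, d, tx, ty = ctm
    return (a * x + c * y + tx, b * x + d * y + ty)

# A pure 2x scaling CTM, as on a Retina-backed context:
retina_ctm = (2, 0, 0, 2, 0, 0)
print(apply_ctm((1440, 900), retina_ctm))  # (2880, 1800)
```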

----------

That is the hypothesis that Anandtech is going with; in reality, we don't really know any implementation details of Quartz.

Again, page 10. Apple has written documentation on Quartz that explains very clearly what is happening.
 
How was I incorrect in anything I posted?

You were incorrect in identifying my post as such. You proceeded to craft a retort in response to what you perceived (incorrectly) from my post. The fact is, and you can go back and plainly see, that I never once made any claim about which part of the hardware does the scaling. I only pointed out that it has to happen somewhere, and that there is a cost associated with it.
 
Actually... that's not quite what the "machine" does. ;) Read the docs and gnasher's post about Quartz on page 10. Regardless of Retina or not, Quartz has to apply the CTM (Current Transformation Matrix) to the drawing context to map it from user space to device space. There will always be intermediate steps and operations unless you're writing directly to frame buffer memory on the GPU (which no modern OS allows anyhow).

Right, I get that, but my point is that on a Retina screen in the "Best for Retina" mode, 2x UI elements will be drawn in a way that maps their source pixels 1:1 onto the pixels of the display. That's all I was trying to say.
 
Actually... that's not quite what the "machine" does. ;) Read the docs and gnasher's post about Quartz on page 10. Regardless of Retina or not, Quartz has to apply the CTM (Current Transformation Matrix) to the drawing context to map it from user space to device space. There will always be intermediate steps and operations unless you're writing directly to frame buffer memory on the GPU (which no modern OS allows anyhow).


Fair enough.

And any good scaling algorithm will pass the input through untouched when the scaling factor is 1.
 
Fair enough.

And any good scaling algorithm will pass the input through untouched when the scaling factor is 1.

But since the CTM handles scaling, rotation, and translation, the system will apply it regardless of any scaling factors, unless Apple has some logic in there to detect that points map 1:1 onto device-dependent pixels/dots without any of the three possible transformations being applied.
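That 1:1 short-circuit idea can be sketched like this (illustrative; whether Quartz actually special-cases the identity transform is not established in this thread):

```python
# Sketch: skip applying the transform entirely when it is the identity.
IDENTITY = (1, 0, 0, 1, 0, 0)  # (a, b, c, d, tx, ty)

def transform_point(point, ctm):
    """Apply an affine CTM, passing the input through untouched at identity."""
    if ctm == IDENTITY:
        return point  # 1:1 mapping: nothing to scale, rotate, or translate
    x, y = point
    a, b, c, d, tx, ty = ctm
    return (a * x + c * y + tx, b * x + d * y + ty)

print(transform_point((100, 200), IDENTITY))            # (100, 200)
print(transform_point((100, 200), (2, 0, 0, 2, 0, 0)))  # (200, 400)
```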
 