Oh, I was confused, as caramelpolice said the OS is not running at 1440x900 and blowing everything up.
Well, caramelpolice is not wrong, and what they wrote is in fact technically accurate. If there is any confusion, it's because of how we are habituated to think about resolutions and pixels (as I mentioned above).
The thing is, the OS is indeed running at a 1440x900 resolution. It's easy to check: open a new Swift playground in Xcode and use the following code to query the dimensions of the screen:
Code:
import Cocoa

// NSScreen reports its frame in points (logical pixels), not hardware pixels
let main_screen = NSScreen.main!
main_screen.frame.width
main_screen.frame.height
Mine says 1680x1050 (I am running a scaled retina mode).
And that is what your software sees: it genuinely believes it is running at this resolution, just as if you were using a non-retina mode. This is what is known as logical pixels, or points. At the same time, everything is of course drawn at the full 2880x1800 resolution, since every point is represented by more than one hardware pixel. Before retina displays, the mapping was usually one point = one hardware pixel. After retina displays, it's more complicated.
So yes, it's running 1440x900, but displays it using 2880x1800 pixels. The OS is aware of this discrepancy, of course, and is able to use higher-quality assets to take advantage of the available higher spatial resolution (just as caramelpolice describes). An app that is retina-aware can also check whether the system is running in HiDPI mode and adjust its custom drawing appropriately.
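For example, back in the playground, this is roughly how you can check for HiDPI mode and see the point/pixel relationship yourself (backingScaleFactor and convertRectToBacking are the relevant NSScreen APIs; the values in the comments are from my scaled 1680x1050 mode):
Code:
import Cocoa

let screen = NSScreen.main!

// 2.0 in any HiDPI (retina) mode, 1.0 in a non-retina mode
let isHiDPI = screen.backingScaleFactor > 1.0

// frame is in points; convertRectToBacking gives the same rect in pixels
screen.frame.size                               // 1680x1050 for me
screen.convertRectToBacking(screen.frame).size  // 3360x2100 for me

System drawing APIs handle this mapping for you; the explicit conversion only matters when you manage your own buffers (more on that below).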
"Supersampling" is pretty exclusively used to refer to, as you mentioned, drawing at a higher resolution and scaling down.
This is why I said it was the same idea as supersampling: drawing at a higher resolution than the target resolution (the target resolution is still the logical resolution, aka the resolution in points!). Of course, supersampling in itself is technically different, since it includes the obligatory resolve step. I still think it's a good analogy, since you take advantage of the fact that the display can draw at sub-point accuracy. But yes, it's very easy to lose the common ground here, sorry if this caused confusion.
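Just to be precise about that difference: in classic 2x2 supersampling you render at twice the target resolution in each direction and then resolve, i.e. filter the extra samples down to one value per target pixel. In its simplest (box filter) form that step is nothing more than this sketch (grayscale values, made-up helper name):
Code:
// The "resolve" step of 2x2 supersampling in its simplest form:
// average each 2x2 block of the oversized image down to one output pixel.
func resolve2x2(_ src: [[Float]]) -> [[Float]] {
    let h = src.count / 2
    let w = (src.first?.count ?? 0) / 2
    var dst = [[Float]](repeating: [Float](repeating: 0, count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            dst[y][x] = (src[2*y][2*x] + src[2*y][2*x+1]
                       + src[2*y+1][2*x] + src[2*y+1][2*x+1]) / 4
        }
    }
    return dst
}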
But this is a software implementation detail in practice, and is mostly only used for UI - things like video playback, 3D rendering, and so on can address individual pixels as they always have. From an end user's point of view, what matters is that they are not seeing a lower-resolution image that is then "blown up". Every pixel of their display is taken advantage of and can be directly addressed.
Well, it depends on what your software does. Legacy apps only see the logical resolution. For instance, if you request a full-screen OpenGL context, you will get a 1440x900 buffer, etc. So if you do custom drawing, you have to take care of the point/pixel discrepancy manually (of course, if you use system APIs, they will do it for you automatically).
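To illustrate, here is roughly what opting in looks like on the old NSOpenGLView path (just a sketch with a made-up view class; the whole NSOpenGLView API is deprecated these days, but it shows the point/pixel split nicely):
Code:
import Cocoa
import OpenGL.GL

// A GL view that asks for the full retina backing instead of the default
// point-sized (e.g. 1440x900) buffer.
class GameView: NSOpenGLView {

    override func awakeFromNib() {
        super.awakeFromNib()
        wantsBestResolutionOpenGLSurface = true   // opt in to pixels, not points
    }

    override func reshape() {
        super.reshape()
        openGLContext?.makeCurrentContext()
        // bounds is in points; convert to pixels before setting the viewport
        let pixelSize = convertToBacking(bounds.size)
        glViewport(0, 0, GLsizei(pixelSize.width), GLsizei(pixelSize.height))
    }
}

Without that opt-in (or in an old app that never knew about retina), the system simply hands you the point-sized buffer and scales it up for you.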
This opens a world of exciting possibilities where one can combine the best of both worlds. For instance, in the 3D software I am writing, I draw the 3D scenes at reduced resolution (to allow for better performance), while I draw the UI at full panel resolution. This gives you decent graphics with good performance, and a super-crisp UI. Unfortunately, almost no game bothers doing it the right way...
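In Metal terms, the idea looks roughly like this (a simplified sketch, not the code from my app, and makeSceneTarget is just a made-up helper): render the 3D scene into an offscreen texture that is only a fraction of the backing size, then composite it and draw the UI into the view's full-resolution drawable.
Code:
import Cocoa
import MetalKit

// Sketch only: a reduced-resolution render target for the 3D pass, while
// the MTKView's own drawable (where the composite and the UI go) stays at
// the full backing resolution.
func makeSceneTarget(for view: MTKView, scale: CGFloat = 0.75) -> MTLTexture? {
    // The view's bounds are in points; convert to the full pixel size first
    let fullPixels = view.convertToBacking(view.bounds.size)

    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: view.colorPixelFormat,
        width: max(1, Int(fullPixels.width * scale)),    // 3D pass at e.g. 75%
        height: max(1, Int(fullPixels.height * scale)),
        mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]   // render into it, then sample it
    return view.device?.makeTexture(descriptor: desc)
}

The scene texture gets stretched during the composite, but the UI on top is drawn at native pixel density, so text and controls stay crisp.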