You seem to be suggesting that I'm wrong, but never exactly say what about, and then you say the same thing as I did in a very roundabout way, with some weird analogy involving a bike thrown in for good measure. I still don't know what these "myths" about driving displays are that you're referring to. It's a pretty well understood phrase.
Sorry if I was not clear enough. The phrase 'to drive a display' is ambiguous between 'is able to output a video signal at a specific resolution' and 'is able to deliver reasonable performance at a specific resolution'. The second statement cannot be easily generalised because performance depends on the usage scenario. This is why the phrase 'to drive a display' is often confusing and misleading: an Intel IGP will happily run with a 4K monitor, but it will obviously struggle if you try to run a game at full 4K resolution.
To illustrate the confusion a bit better, take the OP's original question: is the GPU always driving the 2880x1800 resolution? It is, because it will always output the video signal at that resolution, but that is completely orthogonal to the amount of work the GPU needs to perform when, say, drawing a game. It is entirely possible for it to draw a game at 1024x768 and still output the video signal at 2880x1800. The crucial thing to understand is that the GPU is not drawing directly to the display. It is drawing to a series of memory buffers of different resolutions, which are then combined by the OS in a fairly involved way to produce the final picture.
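To make this concrete, here is a rough sketch in Swift using CoreGraphics, purely for illustration (this is not how Apple's compositor is actually implemented): the 'game' renders into a small off-screen buffer, and a separate compositing step scales that buffer into the full-resolution frame that gets sent to the display.

```swift
import CoreGraphics

// Illustration only: a "game" rendering at 1024x768 while the frame sent to the
// display is 2880x1800. The buffer sizes are the point, not the drawing itself.
func makeBuffer(width: Int, height: Int) -> CGContext {
    CGContext(data: nil, width: width, height: height,
              bitsPerComponent: 8, bytesPerRow: 0,
              space: CGColorSpace(name: CGColorSpace.sRGB)!,
              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
}

// 1. The game draws into its own low-resolution buffer.
let gameBuffer = makeBuffer(width: 1024, height: 768)
gameBuffer.setFillColor(CGColor(red: 0.2, green: 0.4, blue: 0.9, alpha: 1))
gameBuffer.fill(CGRect(x: 0, y: 0, width: 1024, height: 768))

// 2. A compositing step scales that buffer into the native-resolution frame.
//    The display always receives a 2880x1800 signal, no matter what resolution
//    the game rendered at internally.
let frame = makeBuffer(width: 2880, height: 1800)
frame.interpolationQuality = .default
frame.draw(gameBuffer.makeImage()!, in: CGRect(x: 0, y: 0, width: 2880, height: 1800))
```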
Again, you're implying that I'm not quite correct without ever stating what you think I'm incorrect about, and then you pretty much repeat what I said in a very roundabout way.
Again, sorry if I wasn't clear enough. Your post suggests that image scaling is the main reason for suboptimal quality when drawing at a non-native resolution on a classical LCD. I wanted to point out that this is not entirely correct.
I never mentioned anything about "linear interpolation done by the GPU texturing units" (not sure you even know what you're talking about there).
Frankly, if you are unfamiliar with linear interpolation or texturing hardware, then maybe talking about image rescaling is not such a good idea. Especially since you are clearly suggesting that doing the scaling on the GPU is higher quality than using a specialised DSP chip. To make statements like these you should at least understand how rescaling is performed in hardware and what the difference is between scaling done on the GPU and scaling done by a dedicated DSP.
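For reference, this is roughly what 'linear interpolation done by the texturing units' means. Below is a toy bilinear filter written out by hand as a sketch; real texture units do this per fetch in hardware, while dedicated scaler/DSP chips typically use larger multi-tap kernels, which is exactly why the GPU-vs-scaler comparison is not as simple as it looks.

```swift
// Toy bilinear filter: sample a grayscale image (row-major, values 0...1) at a
// fractional coordinate by blending the four surrounding texels. GPU texture
// units do essentially this in hardware on every filtered texture fetch.
func bilinearSample(_ image: [[Double]], x: Double, y: Double) -> Double {
    let x0 = Int(x), y0 = Int(y)
    let x1 = min(x0 + 1, image[0].count - 1)
    let y1 = min(y0 + 1, image.count - 1)
    let fx = x - Double(x0), fy = y - Double(y0)

    // Blend along x on the top and bottom rows, then blend the two results along y.
    let top    = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
    let bottom = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
}

// Upscaling is just sampling the source at fractional coordinates:
let tiny: [[Double]] = [[0.0, 1.0],
                        [1.0, 0.0]]
print(bilinearSample(tiny, x: 0.5, y: 0.5))  // 0.5, the average of the four texels
```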
CRTs have absolutely nothing to do with HiDPI implementations - I'd be really curious to know where you're getting this info. You're going to have to be a lot more specific about what I wrote about Apple's HiDPI implementation that doesn't make sense.
I never said that CRTs have anything to do with the HiDPI implementation. I was merely pointing out that color CRTs and hi-res LCDs share a similar hardware trait: very fine pixel granularity. This reduces distortion from image rescaling and ultimately allows these displays to work with a wide range of resolutions without severe quality degradation.
That might be the way it works with some games, but testing the Unigine Valley Benchmark, if I set the game resolution to 2560x1440 in windowed mode, it takes up the whole screen on my 4K display set to the 2560x1440 HiDPI mode. So in that case, either the game or OS X knows to scale the window as well rather than literally drawing 2560x1440 pixels.
It's actually quite simple. When you set your system to the 2560x1440 HiDPI mode, the OS (and the games) 'see' the display as having a resolution of 2560x1440 logical pixels. For non-GPU-intensive applications, the OS will back each of these logical pixels with a 2x2 grid of physical pixels; this happens in a fashion that is completely transparent to the application, which still thinks it is drawing to a single pixel. Namely: if the app asks for a 100x100 window, the OS will allocate a 200x200 buffer but present it to the app as a 100x100 one.
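You can see this at the API level with a minimal AppKit sketch (it assumes the code runs inside a regular app on a 2x HiDPI screen; the values in the comments are what you would expect there, not guaranteed on every setup):

```swift
import AppKit

// The application works in logical points, while the window's backing store is
// allocated at 2x in each dimension on a Retina/HiDPI screen.
let window = NSWindow(contentRect: NSRect(x: 0, y: 0, width: 100, height: 100),
                      styleMask: [.borderless], backing: .buffered, defer: false)

// The app sees a 100x100 window (logical points)...
print(window.frame.size)                           // (100.0, 100.0)

// ...but on a 2x HiDPI display the OS backs it with a 200x200 pixel buffer.
print(window.backingScaleFactor)                   // 2.0 on a HiDPI screen
print(window.convertToBacking(window.frame).size)  // (200.0, 200.0) in device pixels
```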
However, if the application requests GPU-intensive features (e.g. an OpenGL context), the OS will attempt to optimise and reduce the resolution of the buffer. So when asking for a 100x100 window with OpenGL acceleration, you will actually get a 100x100 pixel buffer. The OS will then take care of all the rescaling so that the image still appears at the correct size (200x200 physical pixels) on a HiDPI display.
Of course, the application can use specific APIs to realise that it is actually dealing with a HiDPI display and ask the OS to adjust its behaviour. For instance, a game could ask for a high-res OpenGL buffer (which is essentially what SC2 does).
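As a sketch of what that opt-in looks like for OpenGL content (wantsBestResolutionOpenGLSurface is the relevant NSView property; the commented values assume the view ends up on a 2x display):

```swift
import AppKit

// By default an OpenGL-backed view gets a 1x backing buffer and the OS upscales
// the result; setting wantsBestResolutionOpenGLSurface asks for a full-resolution
// buffer instead (roughly what a game like SC2 does when it offers Retina rendering).
let glView = NSOpenGLView(frame: NSRect(x: 0, y: 0, width: 100, height: 100))

// Default: false, i.e. a 100x100 pixel buffer that the OS rescales to cover
// 200x200 physical pixels on a 2x display.
print(glView.wantsBestResolutionOpenGLSurface)

// Opt in: the backing store becomes 200x200 pixels and the application is now
// expected to render at the higher resolution itself.
glView.wantsBestResolutionOpenGLSurface = true
print(glView.convertToBacking(glView.bounds).size)  // (200.0, 200.0) once the
                                                    // view is on a 2x display
```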
Furthermore, whether the display is in the 2560x1440 HiDPI mode or the low-res 2560x1440 mode (i.e. no HiDPI), the FPS appears to be the same. I don't know if that means the game's window is bypassed by OS X's HiDPI scaling and therefore there's no performance hit from using a HiDPI screen?
OS X is able to recognise and optimise certain drawing scenarios. Performance-wise, it would make sense for it to step back from the default super-sampled drawing when a game renders to the entire screen, but I am not aware of whether that kind of optimisation is actually done. In your case, the FPS might be the same because, as mentioned above, the additional rescaling step is fairly cheap on modern hardware. At any rate, the game is always rendering to a 2560x1440 buffer (with the OS optionally performing one or two rescaling steps afterwards).