Doesn't that hinder performance?
Theoretically, if there are enough pixels to be "retina", one shouldn't need to render at quadruple resolution and then downscale to the actual resolution.
That means the rMBP is operating at a 3840x2160 resolution - rendering that many pixels AND scaling them at the same time.
That means the "more real estate" option takes up a LOT of CPU.
IMHO, there are basically two reasons why Apple is doing it this way.
First of all, this simplifies the programming API. OS X uses a parameter called the backing scale factor, which tells the program how many real pixels correspond to a single logical pixel. To illustrate: the non-retina 1440x900 mode has a backing factor of 1 (one logical pixel = one real pixel), while the HiDPI 1440x900 mode has a backing factor of 2 (one logical pixel = a 2x2 block of real pixels). Basically, if one wants to draw a line starting at (0,0) and going to (100,100), one first converts these values to real pixel coordinates, i.e. (0,0)-(200,200), and then draws the corresponding pixels.
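A minimal sketch of that conversion, assuming the convention above (toDevicePixels is a made-up helper for illustration; the real value AppKit exposes is backingScaleFactor on NSScreen/NSWindow):

```swift
import CoreGraphics

// Convert a point given in logical coordinates into real device pixels.
// The backing factor is 1.0 in a normal mode and 2.0 in a HiDPI mode.
func toDevicePixels(_ p: CGPoint, backingFactor: CGFloat) -> CGPoint {
    CGPoint(x: p.x * backingFactor, y: p.y * backingFactor)
}

// A line from (0,0) to (100,100) in logical coordinates...
let start = toDevicePixels(CGPoint(x: 0, y: 0), backingFactor: 2.0)
let end   = toDevicePixels(CGPoint(x: 100, y: 100), backingFactor: 2.0)
// ...covers the device pixels from (0,0) to (200,200) on a Retina screen.
print(start, end)
```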
For most applications, this backing value doesn't even matter. OS X uses vector graphics almost exclusively to render its buttons and other controls; furthermore, the rendering of these items is handled by the OS itself. So applications that only use these standard items don't need any modification at all to work with HiDPI modes. Special care with the backing factor is only needed when an application does some custom rendering. The same goes, of course, for displaying image data.
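For custom rendering, the extra work is mostly about taking the window's backing scale factor into account, e.g. so that hairlines land exactly on device pixels. Roughly what such a view might look like (PixelGridView is an illustrative name, not anything from Apple; the pixel-alignment math is a sketch, not the one true way to do it):

```swift
import Cocoa

// Illustrative view that draws a one-device-pixel hairline.
// Standard controls need none of this; only custom drawing does.
final class PixelGridView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        // 1.0 on a normal screen, 2.0 on a Retina (HiDPI) screen.
        let scale = window?.backingScaleFactor ?? 1.0

        // A line width of one *device* pixel, expressed in logical points.
        let hairline = 1.0 / scale

        // Snap to a device-pixel boundary, then offset by half a device
        // pixel so the stroke fills one pixel row instead of smearing two.
        let y = (bounds.midY * scale).rounded() / scale + hairline / 2

        let path = NSBezierPath()
        path.move(to: NSPoint(x: 0, y: y))
        path.line(to: NSPoint(x: bounds.maxX, y: y))
        path.lineWidth = hairline
        NSColor.black.setStroke()
        path.stroke()
    }
}
```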
In earlier versions of OS X, Apple experimented with arbitrary, fractional backing factors: you could set the factor to something like 1.5, which on a 2880x1800 panel would give an effective logical resolution of 1920x1200, and to emulate something like 1680x1050 you would need a factor of 2880/1680 = 1.714285... Here one can already see problems with precision and rounding in pixel coordinate calculations. Another problem is raster (pixel) images - how do we treat those properly? This is why Apple simply decided to go for an 'integer' pixel ratio: the backing factor is either 1 or 2. This simplifies lots of rendering algorithms (they only need to handle these two special cases), essentially eliminates rounding problems - you don't get fractional pixel relationships anymore - and also makes the handling of pixel images trivial: you simply provide two versions, the normal one and the HiDPI (2x2) one. Apple also made it very easy to upgrade existing applications with HiDPI pixel graphics: you simply add image files with the same name but ending in '@2x', and the OS will pick them up automatically and do the rest. If you do some custom rendering in your application you still have to add some code, but that's not a bad price to pay for resolution independence.
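For raster assets this really is as simple as shipping both files and letting AppKit choose; a sketch, assuming an app bundle that contains 'toolbar-icon.png' and 'toolbar-icon@2x.png' (both names made up for illustration):

```swift
import Cocoa

// AppKit picks the variant that matches the screen's backing factor:
// 'toolbar-icon.png' on a normal display, 'toolbar-icon@2x.png' on HiDPI.
let icon = NSImage(named: "toolbar-icon")

let imageView = NSImageView()
imageView.image = icon   // no scale handling needed in application code
```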
With integer scaling factors, the only way to emulate other, 'fractional' resolutions is to downscale as a post-processing step: e.g. render 1680x1050 as a 2x2-scaled 3360x2100 image and then downscale that to the native resolution (whatever it is). This way you can essentially implement non-integer backing factors.
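The arithmetic behind this two-step trick, as a sketch (scaledMode is a made-up helper, and the 2880x1800 panel size is just for illustration):

```swift
import CoreGraphics

// For a desired logical resolution, work out the intermediate framebuffer
// (always rendered at an integer 2x factor) and the ratio by which that
// buffer is then downscaled to the panel's native resolution.
func scaledMode(logical: CGSize, native: CGSize) -> (framebuffer: CGSize, downscale: CGFloat) {
    let framebuffer = CGSize(width: logical.width * 2, height: logical.height * 2)
    return (framebuffer, native.width / framebuffer.width)
}

// "Looks like 1680x1050" on a 2880x1800 rMBP panel:
let mode = scaledMode(logical: CGSize(width: 1680, height: 1050),
                      native: CGSize(width: 2880, height: 1800))
print(mode.framebuffer)   // the intermediate 3360x2100 buffer, rendered at 2x
print(mode.downscale)     // ≈0.857, the effective non-integer ratio
```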
The second reason is that this approach also enhances image quality. I mentioned rounding problems with fractional backing factors. With a two-step rendering process, you first render the image into a buffer whose resolution is high enough to avoid such problems altogether, and then use linear interpolation to blend together the corresponding 'fractions' of pixels. This is essentially super-sampling anti-aliasing that Apple is doing here.
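A toy illustration of the 'blend fractions of pixels' step, reduced to one dimension (the real compositor works on 2-D images, typically on the GPU; this only shows the linear-interpolation idea):

```swift
// Downscale a 1-D row of pixel values by sampling the source at fractional
// positions and linearly interpolating between the two nearest source pixels.
func downscale(_ source: [Double], to targetCount: Int) -> [Double] {
    let ratio = Double(source.count) / Double(targetCount)
    return (0..<targetCount).map { i in
        // Position of the target pixel's centre in source coordinates.
        let pos = (Double(i) + 0.5) * ratio - 0.5
        let left = max(0, min(source.count - 1, Int(pos.rounded(.down))))
        let right = min(source.count - 1, left + 1)
        let frac = pos - Double(left)
        // Blend the two neighbouring source pixels by the fractional overlap.
        return source[left] * (1 - frac) + source[right] * frac
    }
}

// A 3360-pixel-wide rendered row squeezed onto a 2880-pixel-wide panel row.
let row = (0..<3360).map { Double($0 % 256) }
let panelRow = downscale(row, to: 2880)
print(panelRow.count)   // 2880
```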
Now, a 'proper' way to do resolution independence would be to abandon pixels altogether and just use physical units of space like cm etc. However, this would be a horrible mess for both the software and the hardware. In this regard, Apple's solution is a clever hack which ensures that it's easy to write applications which work - and look great - on both normal and HiDPI screens.