Does anyone know why Apple hasn't created a custom ARM chip or ASIC to drive the Retina display? They implemented most of the rendering in software, which consumes CPU and GPU cycles.
They should just create a very low-power chip whose only job is to take a resolution and double it along both axes. That way the GPU thinks it is driving the display at 1440x900, but the chip converts that into 2880x1800.
There would be no lag and no CPU/GPU cycles wasted just to drive the Retina resolution.
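Roughly what I mean by "double it along both axes" is plain 2x nearest-neighbor replication: every source pixel becomes a 2x2 block in the output. A minimal software sketch of what such a scaler chip would do (buffer names and layout are illustrative, not anything Apple actually does):

    #include <stdint.h>
    #include <stddef.h>

    /* Replicate each pixel of a src_w x src_h framebuffer into a 2x2 block
     * of a (2*src_w) x (2*src_h) output buffer. dst must be 4x the size of src. */
    static void double_framebuffer(const uint32_t *src, uint32_t *dst,
                                   size_t src_w, size_t src_h)
    {
        size_t dst_w = src_w * 2;

        for (size_t y = 0; y < src_h; y++) {
            for (size_t x = 0; x < src_w; x++) {
                uint32_t px = src[y * src_w + x];

                /* Write the same pixel to the four output positions it covers. */
                dst[(2 * y)     * dst_w + (2 * x)]     = px;
                dst[(2 * y)     * dst_w + (2 * x + 1)] = px;
                dst[(2 * y + 1) * dst_w + (2 * x)]     = px;
                dst[(2 * y + 1) * dst_w + (2 * x + 1)] = px;
            }
        }
    }

In dedicated hardware this is just address remapping on the scanout path, which is why it seems like it should cost almost nothing.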