
ppone · macrumors regular · Original poster · Sep 1, 2011
Does anyone know why Apple has not created a custom ARM chip or ASIC to drive the Retina display? Right now most of the rendering is done in software, which consumes CPU and GPU cycles.

They should just create a very low-power chip whose only job is to take a resolution and double it along both axes. That way the GPU thinks it is driving the display at 1440x900, but the chip converts that into 2880x1800.

There would be no lag and no wasted CPU/GPU resources driving the Retina display.
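To be clear, the kind of dumb doubling I have in mind is trivial. Here is a rough Python/numpy sketch of the idea (purely an illustration of nearest-neighbour 2x upscaling, not anything Apple actually ships; the array layout is just for the example):

```python
import numpy as np

def pixel_double(frame: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upscale: each source pixel becomes a 2x2 block."""
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

# A 1440x900 frame (height x width x RGB) becomes 2880x1800.
frame = np.zeros((900, 1440, 3), dtype=np.uint8)
print(pixel_double(frame).shape)  # (1800, 2880, 3)
```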
 
And that would completely defeat the point of the Retina display. What is the advantage of a hi-res display if you don't use the full resolution in the first place?
 

Different display sizes would need different hardware circuits. Additionally, to support three resolution levels, like the 15'' rMBP does, you would need three different upscaling configurations. Finally, not everything is upscaled: the wallpaper is displayed at native resolution while UI elements are not. It seems overly complex for the gains you would get.
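For context, the way the scaled modes work (as far as I understand it, and assuming the three scaled settings are 1440x900, 1680x1050 and 1920x1200) is that OS X renders everything at 2x the "looks like" size and then resamples that buffer to the 2880x1800 panel, roughly like this:

```python
PANEL_W, PANEL_H = 2880, 1800  # native 15" rMBP panel

# Assumed "looks like" settings for the three scaled levels mentioned above.
scaled_modes = [(1440, 900), (1680, 1050), (1920, 1200)]

for w, h in scaled_modes:
    fb_w, fb_h = 2 * w, 2 * h   # the UI is rendered at 2x first
    scale = PANEL_W / fb_w      # then the whole buffer is resampled to the panel
    print(f"looks like {w}x{h}: render {fb_w}x{fb_h}, "
          f"resample x{scale:.2f} -> {PANEL_W}x{PANEL_H}")
```

Only the 1440x900 setting comes out to an exact 1:1 mapping; the other modes need a non-integer resample, which is exactly the part a fixed-function pixel doubler couldn't do.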
 

Because all 2D interfaces are rendered in software. That's also true on iOS.

It's just animations that get accelerated in hardware.

The reason iOS doesn't lag is that it doesn't have to deal with many different apps rendering to their windows at the same time, nor with a window manager (Mission Control) that renders all currently open windows in real time. You can check this by playing a video on YouTube and then going into Mission Control: notice how your video still plays in the window preview?

That, and it's not like the hardware can't handle it. Apple just purposefully throttles the performance of the interface in favor of battery life and heat. They aren't using the hardware in the rMBP to its maximum potential.

It's easy to see because a 2.5GHz dual-core rMBP and a 2.7GHz quad-core rMBP lag in exactly the same way. You'd think that with two extra cores and a lot more processing power it wouldn't lag, but that's obviously not the case.

Install Windows 7 or Linux, and you'll clearly see that the system blazes through it all. But then battery life suffers tremendously.
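Just to put numbers on how much data the interface has to push at Retina resolutions, here is a back-of-the-envelope sketch (assuming 32-bit pixels and 60 fps; actual compositing cost obviously depends on much more than raw framebuffer size):

```python
def frame_cost(w, h, bytes_per_pixel=4, fps=60):
    """Bytes per frame and per second for a w x h framebuffer."""
    per_frame = w * h * bytes_per_pixel
    return per_frame, per_frame * fps

cases = {
    "1440x900 (non-retina)": (1440, 900),
    "2880x1800 (retina native)": (2880, 1800),
    "3840x2400 (scaled 1920x1200 mode)": (3840, 2400),
}
for label, (w, h) in cases.items():
    per_frame, per_sec = frame_cost(w, h)
    print(f"{label}: {per_frame / 2**20:.1f} MiB/frame, "
          f"{per_sec / 2**30:.2f} GiB/s at 60 fps")
```

That is four times the pixels of 1440x900 at native resolution, and over seven times in the biggest scaled mode, so there is a lot of data to move every frame whether or not the hardware is being held back.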
 
The only thing they could offload is the downscaling back to native resolution when you set something higher than "Best for Retina". Since that is probably the least demanding part, it won't help performance regardless of where it is done.

I wonder, though, how the OP imagines this magic chip would work. If the CPU and GPU only worry about 1440x900, how would a high-res icon ever reach the ARM chip? It would be entirely pointless; one could just use a low-res IPS screen and get the same result.
 