To me, there are quite a few questionable aspects of the iPhone 6 Plus's graphics performance. Think back to the iPhone 4: it had four times as many pixels as the iPhone 3GS (double in each dimension), so it needed a proportional increase in GPU throughput just to match the 3GS. The result was that GPU performance was generally good, but the difference was noticeable in some games like Modern Combat 2, and it was even more prominent on the iPod touch 4G.

The iPad 3rd Generation fared far worse with its own fourfold pixel increase. People don't mention this much, but the 3rd Gen's graphics performance was, at the end of the day, pretty bad. The GPU itself was still roughly twice as fast as the previous generation's, but that wasn't enough to drive all the extra pixels, and the result was a net drop in on-screen performance. I suspect this is the reason Apple released the 4th Generation iPad just half a year later.

iPad 3rd Gen On-Screen Benchmarks: Source: AnandTech

Fast forward to now, and we have a similar situation with the iPhone 6 Plus:

The iPhone 6 Plus has to render about 2.5 million pixels.
The iPhone 6 has to render about 1.0 million pixels.
The iPhone 5s has to render about 0.7 million pixels.

Source: Apple Special Event

(Note that this assumes roughly a 1080p resolution; the actual rendered size is 1242p, i.e. 2208×1242, about 2.7 million pixels.)

That leaves the iPhone 6 Plus rendering about 150% more pixels than the iPhone 6, and roughly 3.5 times as many (about 250% more) as the 5s. Phil Schiller said the A8's GPU performance is 50% better than the A7's. Source: Apple Special Event. That gives the iPhone 6 roughly the same per-pixel graphics performance as the 5s. By the same logic, though, the iPhone 6 Plus would need about 3.5 times the A7's GPU performance, scaling with its pixel count, to match the 5s per pixel: a 250% increase rather than the quoted 50%. The real-world requirement will probably be lower, since GPU load doesn't scale perfectly linearly with pixel count, but even halving that number still leaves a large gap.
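As a sanity check, here is a quick Python sketch of the pixel arithmetic. It uses the exact render-buffer sizes (the 6 Plus renders at 2208×1242 before downsampling), so the gap comes out slightly larger than the rounded figures above suggest:

```python
# Sanity check of the pixel counts, using actual render-buffer sizes.
# Note: the iPhone 6 Plus entry is the internal 1242p render size,
# not its physical 1920x1080 panel.
resolutions = {
    "iPhone 5s": (1136, 640),
    "iPhone 6": (1334, 750),
    "iPhone 6 Plus": (2208, 1242),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, px in pixels.items():
    print(f"{name}: {px / 1e6:.2f} million pixels")  # 0.73 / 1.00 / 2.74

# How many times more pixels the 6 Plus pushes than the 5s:
ratio = pixels["iPhone 6 Plus"] / pixels["iPhone 5s"]

# Schiller's quoted A8 GPU uplift over the A7:
gpu_uplift = 1.5

# Per-pixel GPU budget of the 6 Plus relative to the 5s:
per_pixel = gpu_uplift / ratio
print(f"6 Plus renders {ratio:.2f}x the pixels of the 5s")   # 3.77x
print(f"per-pixel GPU budget vs the 5s: {per_pixel:.2f}x")   # 0.40x
```

On these exact numbers, a 50% GPU uplift leaves the 6 Plus with well under half the 5s's per-pixel throughput, which is the core of the concern here.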
Allegedly the A8 GPU has 6 cores (up from 4). Does anyone know if this significantly increases performance in a way that wouldn't have been factored into the 50% Schiller mentioned?

As if that weren't enough work for the GPU, there is also the downscaling step to take into account: every rendered frame has to be scaled from the 2208×1242 buffer down to the 1920×1080 panel, a per-axis factor of about 0.87 (1080/1242, i.e. dividing by 1.15), which presumably isn't free either. Note that any GPU performance deficit would still be negligible in everyday use; it would predominantly show up in demanding games.

So, what's your take on this? I'm by no means an expert; I just came up with this after some contemplation, and my train of thought may well have some (lots of?) flaws. Thanks.
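For what it's worth, the downscale factor follows directly from the two resolutions; a quick sketch:

```python
# The 6 Plus renders at 2208x1242, then scales down to its 1920x1080 panel.
render_w, render_h = 2208, 1242
panel_w, panel_h = 1920, 1080

# Per-axis scale factor (identical on both axes: 1920/2208 == 1080/1242):
scale = panel_h / render_h
print(f"per-axis downscale factor: {scale:.4f}")  # 0.8696, i.e. dividing by 1.15

# Fraction of rendered pixels that survive onto the panel:
area_ratio = (panel_w * panel_h) / (render_w * render_h)
print(f"pixel-count ratio: {area_ratio:.3f}")  # 0.756
```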