In a small, passively cooled device like a phone, thermal headroom is the bottleneck. That "thermal budget" is better spent on the GPU and ISP than on the CPU. If the microarchitecture and process allow for better CPU efficiency, it's better to keep CPU performance the same and give the headroom to the blocks with more impact: the GPU for gaming and the ISP for computational photo and video. iPhone app performance hasn't been constrained by the application processor in the past decade.
 
MacBook Pros launching in Oct/Nov are likely based on the A14. Next year's Air should perform well, based on these numbers.
What makes you say that? The M1 was based on the A14 that was launched only a month or two prior. The same is likely for the A15 and M1X/M2.
 
So, a ~25% increase in per-core GPU performance (assuming the cores scale linearly)? Maybe the extra ~3 billion transistors were crammed into the GPU cores in some way?
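The ~25% per-core figure can be sanity-checked with simple arithmetic. A minimal sketch, assuming the scores discussed in this thread (A14 at ~9123 with a 4-core GPU, A15 at ~14216 with a 5-core GPU) and assuming linear scaling across cores:

```python
# Hypothetical per-core scaling estimate from the leaked Metal scores.
# Assumptions: A14 ~9123 with 4 GPU cores, A15 ~14216 with 5 GPU cores,
# and that Metal compute scores scale linearly with core count.
a14_score, a14_cores = 9123, 4
a15_score, a15_cores = 14216, 5

per_core_a14 = a14_score / a14_cores   # ~2280.8 per core
per_core_a15 = a15_score / a15_cores   # ~2843.2 per core

gain = per_core_a15 / per_core_a14 - 1
print(f"per-core gain: {gain:.1%}")    # per-core gain: 24.7%
```

Under those assumptions the per-core gain comes out to roughly 25%, matching the estimate above.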
 
There is some question about that result. I do wonder about the fact they're going with a Mali GPU. My US-version S21 Ultra has an Adreno, which has reportedly performed better than the Mali in other regions' versions.

But according to this article the Pixel 5 had an Adreno GPU and was a poor performer, so going with Mali now is apparently a good thing?

It will be interesting to see real-world results. I've seen tests from a couple of years ago where Samsung phones performed more smoothly and dropped fewer frames than the iPhones they were compared to at the time, despite Apple having the usual bragging rights on benchmarks. It may have been due to the Samsungs having better thermal management.

It doesn’t matter to me personally anymore. Bejeweled isn’t particularly taxing. 🤣
That doesn't mean much. The Pixel 5 uses a mediocre chip, equivalent to a flagship phone from four or five years ago. There's a wide range of Adreno GPUs; the one in a Pixel 5 is nothing compared to your S21's.
 
Even if the increased GPU performance isn’t as useful for most content consumed or created on a phone screen, it’s good to see they are still able to make big strides in performance, because VR/AR will need all the GPU performance it can get. It will need very high resolution for even simple tasks, and because the visuals will take up a much larger part of your field of view, fidelity of graphical assets will be much more noticeable.
 
What about 35%? 9123 as a percentage of 14216 is about 65 percent.
That's not how percentage increases work... You take the difference between the two numbers (14216 − 9123 = 5093) and divide that by the original number (5093 / 9123 ≈ 0.558), so you get a ~56% increase OVER the original number. With your math you can say the original is ~64% of the new score, but the new one is a ~56% increase in performance over the prior.
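The distinction between "percent increase" and "percent of" can be made concrete with the two scores from the thread:

```python
# Two different questions about the same pair of scores:
#   1) how much faster is the new score than the old one? (percent increase)
#   2) what fraction of the new score is the old one?     (percent of)
old, new = 9123, 14216

increase = (new - old) / old   # (14216 - 9123) / 9123 ~ 0.558
fraction = old / new           # 9123 / 14216 ~ 0.642

print(f"{increase:.1%} increase; old is {fraction:.1%} of new")
# 55.8% increase; old is 64.2% of new
```

The two numbers answer different questions, which is why "~65%" and "~56% increase" can both be derived from the same scores.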
 
Decided to Hold Off on ordering an iPhone 13 Pro OR 13 Pro Max, until AFTER I see some Geekbench CPU results.

Just read an Article on another Apple-focused website that details some speculation into why the CPU Perf Boost is so modest this year.

This might NOT be the year to upgrade, after all.

NOT for the prices Apple wants.

Especially with that 256 GB requirement to get 4K ProRes !
 
Cool, but were the previous generations slouches for gaming? Most games haven't utilized the full GPU capabilities for the last few generations, because developers want the widest market and design for the lowest-specced iPad, with some exceptions for certain titles.

The GPU in the A12 iPad mini 5 with 3 GB of RAM handled things without hiccups. These improvements seem aimed at AR/VR and ML workloads; nothing the end user will notice on a day-to-day basis.
 
Decided to Hold Off on ordering an iPhone 13 Pro OR 13 Pro Max, until AFTER I see some Geekbench CPU results.

Just read an Article on another Apple-focused website that details some speculation into why the CPU Perf Boost is so modest this year.

This might NOT be the year to upgrade, after all.

NOT for the prices Apple wants.

Especially with that 256 GB requirement to get 4K ProRes !
I concur. I usually upgrade yearly, and this year I'm just not compelled, though that Sierra Blue colour is tempting. I ordered the iPad mini and cannot wait to get my hands on it.
 
Decided to Hold Off on ordering an iPhone 13 Pro OR 13 Pro Max, until AFTER I see some Geekbench CPU results.

Just read an Article on another Apple-focused website that details some speculation into why the CPU Perf Boost is so modest this year.

This might NOT be the year to upgrade, after all.

NOT for the prices Apple wants.

Especially with that 256 GB requirement to get 4K ProRes !
I can already tell you what the single-core scores will be: just look up the iPad Pro M1 scores and there you go. Both the M1 and A15 performance cores are clocked nearly identically at around 3.2 GHz. It'll be around the 1700 mark for single-core performance, and I'd assume very similar gains for multi-core.
 
I can already tell you what the single-core scores will be: just look up the iPad Pro M1 scores and there you go. Both the M1 and A15 performance cores are clocked nearly identically at around 3.2 GHz. It'll be around the 1700 mark for single-core performance, and I'd assume very similar gains for multi-core.
The single-core score, definitely, but I highly doubt the A15 will match the M1's multi-core performance.
 
What makes you say that? The M1 was based on the A14 that was launched only a month or two prior. The same is likely for the A15 and M1X/M2.
The M1X was intended to launch earlier this year but was delayed due to mini-LED. The M1X, as reported by Gurman, has a CPU/GPU core count that is an even multiple of the A14's, but not the A15's.
 
What makes you say that? The M1 was based on the A14 that was launched only a month or two prior. The same is likely for the A15 and M1X/M2.
X variants are always based on the same core microarchitecture as the regular variant. The A12X/Z used the same Vortex and Tempest core design as the A12, the M1 used the same Firestorm and Icestorm cores as the A14, so the M1X will presumably use the same cores as well. The M2 will likely use the Avalanche and Blizzard cores from the A15.
 
Metal benchmarks are pointless, seeing as Apple has no competition other than its own older models with older core designs and slower shared-memory architectures.
You could compare it to the base M1 Air, which scores ~19000 if I remember right with a 7-core GPU, Metal to Metal.

Nevertheless, compute-vs-compute synthetic tests have their value, since the tasks processed are probably the same or very close. A game could use OpenGL on one card+OS combo and Vulkan/DX/etc. on another: if implemented 1:1 and the final image is the same, the faster one tells you something.

Also, if the M1 videos around are any metric, even discrete GPUs with theoretically 4x or 8x Apple's synthetic GPU benchmark scores deliver real-game FPS of more like 2x or 3x at most.
 