I can hardly wait for a redesigned MacBook Air with an M-series processor based on this A15.
I wonder how much better it will be compared to the M1. I'm not sure about 16GB of RAM; maybe it's sufficient, but I'd rather be safe. I deal with very poorly written applications at my work, and that has changed my thinking.
 
Using those GB5 numbers (single core):

[attached spreadsheet screenshot]


Not sure if you can look at the math this way, but of that ~10% performance gain, most is explained by the clock speed increase, with only ~2.4% attributable to actual chip improvements...

(Wondering what this will mean for the M2/M1x)
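That decomposition can be sketched in a few lines. The scores and clocks below are illustrative assumptions, not the poster's actual spreadsheet inputs, so the residual comes out a bit different from the ~2.4% quoted:

```python
# Split a Geekbench 5 single-core gain into a clock-speed part and a
# per-clock (IPC-ish) part. All figures are assumed, not official.
a14_score, a15_score = 1590, 1730   # approximate GB5 single-core scores
a14_clock, a15_clock = 3.0, 3.23    # GHz, assumed A14 and A15 peak clocks

total_gain = a15_score / a14_score - 1                             # overall gain
clock_gain = a15_clock / a14_clock - 1                             # from frequency alone
ipc_gain = (a15_score / a14_score) / (a15_clock / a14_clock) - 1   # leftover per-clock gain

print(f"total: {total_gain:.1%}, clock: {clock_gain:.1%}, per-clock: {ipc_gain:.1%}")
```

The point survives whichever exact clocks you assume: dividing out the frequency ratio leaves only a small single-digit per-clock improvement.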
 
Using those GB5 numbers (single core):

[attached spreadsheet screenshot]

Not sure if you can look at the math this way, but of that ~10% performance gain, most is explained by the clock speed increase, with only ~2.4% attributable to actual chip improvements...

(Wondering what this will mean for the M2/M1x)
Nice to see someone like me take the effort to put it in a spreadsheet and post screenshots. 😂
 
Sigh... four pages and 80 comments.

This isn't "GPU" performance. It's GPU compute performance. Geekbench doesn't test anything graphics-related, so the 50% increase has absolutely nothing to do with graphics.

The compute workload uses Metal Compute. While I haven't had time to fact-check all the details, I wouldn't be surprised if part of it runs on the Neural Engine, which incidentally also got a 50% increase in performance.
 
Using those GB5 numbers (single core):

[attached spreadsheet screenshot]

Not sure if you can look at the math this way, but of that ~10% performance gain, most is explained by the clock speed increase, with only ~2.4% attributable to actual chip improvements...

(Wondering what this will mean for the M2/M1x)
Indeed. I will say that I believe the top-engineer exodus to Nuvia is probably rearing its head by now, and will continue to do so, even if this year's lack of IPC increases proves an exceptionally poor aberration.

However, if there's one thing I like about Apple here, it's the playing coy, as opposed to deliberate malfeasance in the form of boasting about performance increases mostly attributable to a modest clock boost within already modest clock speed ranges (IMO a bit too high for mobile, but that's a separate conversation). With the speculated use of TSMC's N5P process, there may not even be any power increase over the A14's Firestorm cores at 3.1GHz.

For perspective, this "disappointing" year (and it is! No facetious intent on my part; we have high standards) by Apple standards still took the iPhone from 1590 to 1730. We'll guess a 10% performance increase to be borne out on the M1X's Avalanche cores, mostly due to a clock speed increase of similar magnitude: maybe 3.2GHz (the current M1 peak) to 3.33GHz, recycling the fractional 3.1-to-3.23 increase we just saw from A14 to A15, and they could plausibly go higher.

With this, and no cache increases besides maybe the system-level cache, it's not unreasonable to expect an N5P-fabbed M1X's single-core performance to top out at 1900+ in GB5 at almost exactly the same per-core power profile. Even in a disappointing year, I think it's important to realize how utterly absurd (read: holy ****) that is, even before taking the power profile into account. Take that into account? Well, let's just say I'm eager to see the 8-performance-core M1X's sustained performance and power consumption against Intel- and AMD-based mobile products. AMD will put up a valiant effort, but neither will come close in the ability to sustain performance or to reach these figures at 5-6 watts per core, unfortunately (and I say that as someone cheering for both, since I loathe macOS).
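For what it's worth, the projection in the paragraph above reduces to two lines of arithmetic. The M1 score and every clock here are assumptions taken from the post, not measurements:

```python
# Recycle the A14 -> A15 fractional clock bump onto the M1 to guess an
# M1X clock, then scale an assumed M1 score by the post's ~10% ballpark.
a14_clock, a15_clock = 3.1, 3.23          # GHz, assumed
m1_clock = 3.2                            # GHz, assumed M1 peak
m1x_clock = m1_clock * (a15_clock / a14_clock)   # ~3.33 GHz

m1_score = 1740                           # assumed M1 GB5 single-core
m1x_score = m1_score * 1.10               # post's ~10% guess, lands at ~1914

print(f"M1X guess: {m1x_clock:.2f} GHz, ~{m1x_score:.0f} GB5")
```

Which is where the "1900+" figure comes from; it is extrapolation stacked on rumor, so treat it accordingly.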
 
Well, this is a problem for all mobile SoCs, not an Apple-exclusive problem. There are reports of the Snapdragon 888 overheating while recording 4K video.
I’ve been switching between an Android phone and an iPhone recently, and I think the S888’s heat issues are massively overblown, especially in light of the actual products it’s fitted to, e.g. larger devices with decent passive heat dissipation (for whatever reason). It is NOT akin to the 810’s disaster, which is a common trope and complete ******** (as an aside). In contrast, I’ve had bigger issues with the iPhone 12 growing absurdly hot to the touch, whether charging or otherwise, alongside overheating I never saw on the XS. My Android device does indeed boost to 2.84GHz in the UI, as Qualcomm states is the maximum, but the A78s also see a lot of use, and it makes me wonder if those hyper-efficient A78s underpin a lot of the great multitasking and sustained performance I see on Android, absent the notable heat I hit rather rapidly on iOS. I really love what ARM is doing with “big” cores a la the A700 series (and before someone says it: no, Apple’s little cores are not touching an A78 or A710 from ARM, lol; the A76, yeah, kind of).

But it is still a mobile product, and indeed no phone excels on these parameters; all will throttle sooner or later under games or photo-intensive work. I really wish Apple or Google would give us a sort of modular battery-saving mode. I get it, they won’t let us undervolt the system (lol), but something closer to what Samsung allows, directly capping clock speeds to avoid throttling, would be nice.
 
It’s ridiculous how many people commented “useless” without even understanding why you would want a faster GPU. If you don’t understand computers, stop commenting on technical articles.

Hint: A better processing unit means you can take better photos and videos, for example, which I’m sure you’d want.
 
Chip/GPU experts: A-series SoCs have had multi-core CPUs for a few generations now, where some cores are performance cores and some are efficiency cores. Does the same concept make any sense in the GPU world in terms of possible power savings?

I don't play graphics-intensive games; not once since my first iPhone 4 have I felt any need for faster graphics for anything I do on my phone, which is mostly music, messaging, looking stuff up, and a lot of ebook reading. With Apple adding more GPU cores in multiple new SoC versions, I'm wondering if there might ever come a time when, similar to how Apple currently says "6 CPU cores, 2 performance & 4 efficiency," we might one day hear "6 GPU cores, 5 performance and 1 efficiency" (for instance). An efficiency GPU core could be all that's active while I sit staring at a static page in my ebook, the only screen changes for an hour or two being the page-turn animations every 30 seconds or so.

A couple of clarifications before anyone jumps on me...

- I'm not a graphics expert so this might be a totally nonsense concept. That's why I'm asking the question.

- I'm not one of those people who thinks, just because I don't use a feature, it is worthless. I'm not dismissing the great performance leaps Apple's amazing engineers are achieving and Apple knows its target market so clearly there is a significant section of its customer base that does want to see these gains and more. I'm just wondering if a time might ever come where a similar high-low concept might come to GPUs so that both the performance enthusiasts and the battery-life obsessives can see significant gains.
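As a back-of-the-envelope illustration of why the question above could matter, here is a toy energy model. Every wattage in it is invented for illustration and reflects no real silicon:

```python
# Toy model: GPU energy over a 2-hour e-book session whose only GPU work
# is the occasional page-turn animation.
# All wattages and the battery size are made-up assumptions.
perf_core_watts = 1.5   # hypothetical performance GPU core, lightly loaded
eff_core_watts = 0.2    # hypothetical efficiency GPU core
hours = 2.0
battery_wh = 12.0       # roughly phone-sized battery, also an assumption

saved_wh = (perf_core_watts - eff_core_watts) * hours
print(f"~{saved_wh:.1f} Wh saved, ~{saved_wh / battery_wh:.0%} of the battery")
```

Even with generous assumptions the saving only shows up in near-idle scenarios like the reading example, which is exactly where a hypothetical efficiency GPU core would live.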
 
All this speed is fine, but stop and think about what apps really take advantage of all this power. My iPhone 12 Pro Max is extremely fast; I use it for everything: photos, music, streaming. But is there an app that really takes full advantage? I'm sure over time this will be fixed, but with all these super-fast-charging phones of today, most people don't care about any of that beyond: can it make a call, take pics, watch YouTube, stream Netflix? And yes, the phone of five years ago can do all that. Even my 128GB pink 4-inch iPhone SE can do all that.
 
That beats the AMD Radeon Pro 555X, used in the low-end iMac from 2017, in a phone 😎

The A15 comes in 5-core and 4-core GPU variants.

So, very likely, the M2 chip in the upcoming MacBook Airs, which is based on the A15, will have 8-core and 10-core options (as rumored).

The 4-core A14 GPU has a score of 8977

The 4-core A15 GPU has a score of 10608
The 5-core A15 GPU has a score of 14216

Which is kind of weird: 25% more cores yielding a 34% performance gain? Awesome if true, though.

The M1 8-core GPU has a score of 21168

The per-core performance jump this generation is 18%.

So the 10-core M2 GPU Metal score would be somewhere between 30,000 and 33,500.

That's in the range of an AMD Radeon R9 M395X, the high-end GPU option in the late-2015 iMac.
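The arithmetic above checks out in a few lines. The scores are the ones quoted in the post; the 10-core estimate is the same naive "core count times per-core gain" extrapolation, which real chips rarely scale to perfectly:

```python
# GPU Metal Compute scores as quoted in the post (not verified here).
a14_4core = 8977
a15_4core = 10608
a15_5core = 14216
m1_8core = 21168

per_core_gain = a15_4core / a14_4core - 1   # ~18% gen-over-gen at equal cores
five_vs_four = a15_5core / a15_4core - 1    # ~34% from only 25% more cores

# Naive 10-core "M2" projection: scale the M1 by core count and per-core gain.
m2_10core_est = m1_8core * (10 / 8) * (1 + per_core_gain)   # ~31,270

print(f"per-core: {per_core_gain:.1%}, 5c vs 4c: {five_vs_four:.1%}, "
      f"M2 10-core guess: {m2_10core_est:.0f}")
```

The projection lands inside the post's 30,000 to 33,500 window; the odd 5-core-vs-4-core superlinearity suggests the two variants may differ in clocks or binning rather than core count alone.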
 
Apparently no one “cares” because it does. But… if it didn’t… everyone would be freaking out and calling it a “fail”.
 
Make Android users cry?

I play games. Also do video editing when I’m out & about. Plus there’s the future-proofing that comes with extra performance.
Ya, but the "games" you play also have to work on the $199 iPod touch that some parent bought their kid six years ago, rocking an A8. It's not like you're getting better AI or anything. Maybe slightly better rendering? Hard to tell on a tiny screen like that. Best case, Candy Crush will load half a second quicker, with all the other hardware updates.

If you bought into the marketing hype that the iPhone is some premier video-editing experience, then good for you. The seconds per day you save over any device released in the last four years will make a difference. :rolleyes:

Look this is all incredible technology. I understand what a technical accomplishment it is, but in the real world this is useless chatter.
 
Apple needs to be more permissive with emulation/virtualization so we can run PC applications with a wireless keyboard. Because seriously, what good is so much power if we can't use it? iPhones are already comparable with even mid-range notebooks at this point.

And no, Apple isn't just bluffing. I've seen a few iPhone models running emulators, and even old models can be ridiculously smooth (though older models might run into thermal throttling).

If they don't want us to tap into that much power, they might as well focus on better batteries. Proportionally (that is, comparing power against battery), battery efficiency hasn't improved much over the last few generations.
 
$10,969.00 was the MSRP, and none of the stores near me in central NNJ were discounting. It was special, like trying to buy a Corvette C8 when they came out: sticker plus $10k as an "availability option".
The joke was that you listed the RAM in GB, when RAM back then was measured in MB. A few MB of RAM in 1990 cost thousands of dollars; a GB of RAM was unheard of.

The IIfx topped out at 128MB, and 128MB would have cost more than the combined average yearly incomes of several people in 1990. I don't care enough to pull out an old MacWorld to see exactly how much.
 