A couple of observations from perusing the v6 results:
- A lot of people tested their iPhone 14 Pro
- Older systems are quickly outclassed
- The newest AMD/Intel parts are hitting above 3,000 single / 20,000 multi and scoring much better than the latest Apple chips - but those seem to be desktops
- In the grand scheme of things, Apple Silicon is in a tighter range of results than Intel/AMD
I'm eager to see how Apple's 3nm offerings fare.
I wouldn't characterize things this way.
My quick summary (see eg https://www.computerbase.de/2023-02/geekbench-6-die-neue-benchmark-suite-im-leser-benchmark/) is:
- The fastest available x86 cores run single-threaded around 10 to 20% faster than the M2.
- The fastest available x86 multi-core designs are likewise about 10 to 20% faster than M2 Pro/Max in the same sort of price range. So eg the closest match to M2 Pro/Max is an Intel i5-13600-class design with 6(*2)+8 cores, against Apple's 8+4 cores. Apple wins on the number of "big" cores, but Intel has SMT plus more E-cores, and those E-cores are substantially more powerful than Apple's E-cores. Yet overall we have roughly a draw. To get much better than Apple you need to go to 16(*2) P-cores (AMD) or 8(*2)+16 cores (Intel). (A rough thread-count tally is sketched below.)
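To make that core-count comparison concrete, here is a minimal tally in Python of the configurations named above (only the core counts come from the comparison; the labels and the thread arithmetic are mine):

```python
# Rough hardware-thread tally for the designs mentioned above.
# Tuples are (P-cores, threads per P-core, E-cores); configurations
# are taken from the comparison above, labels are illustrative.
configs = {
    "Apple M2 Pro/Max (8+4)":       (8, 1, 4),    # no SMT, weaker E-cores
    "Intel i5-13600-class (6*2+8)": (6, 2, 8),    # SMT on the P-cores
    "AMD 16 P-core (16*2)":         (16, 2, 0),
    "Intel i9-class (8*2+16)":      (8, 2, 16),
}

for name, (p, smt, e) in configs.items():
    print(f"{name}: {p + e} cores, {p * smt + e} hardware threads")

# The i5-13600-class part fields 20 hardware threads against Apple's 12,
# yet per the scores above the multi-core result is roughly a draw -
# ie Apple is extracting more work per thread.
```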
I think this is a very good showing on the Apple side, especially since, as I keep trying to point out, the A15 (and thus the M2) were not designed to be great performance improvements over the A14/M1; they were designed to be substantially more energy efficient. The performance improvements will come with the N3 design, from both the expected process gains and the expected IPC gains. There's just no way Apple can't get at least a 20% IPC boost (ie enough to match the best current x86 single-threaded results)! I can list (and have listed) a number of practical improvements to that end.
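As a back-of-the-envelope check on that claim, working purely in relative terms (the 1.15 midpoint and the 20% uplift are assumptions for illustration, not measurements or predictions):

```python
# Normalize M2 single-thread performance to 1.0 and compare in relative terms.
m2 = 1.0
best_x86 = m2 * 1.15        # assumption: midpoint of the "10 to 20% faster" gap above
next_gen_apple = m2 * 1.20  # hypothetical ~20% gain from N3 process + IPC improvements

print(f"next-gen Apple vs today's best x86 single-thread: {next_gen_apple / best_x86:.2f}x")
# ~1.04x - even a 20% uplift would slightly edge out today's fastest x86 cores,
# though of course those won't be standing still either.
```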
(The reason the schedule was derailed, giving us the apparently boring and disappointing A16, is simply Covid, which delayed N3. The good times are returning!)
I'm honestly more curious these days about how the GPU side will play out. I've now dug into enough GPU details to conclude that Apple is on a path there very much like the CPU path. The starting point was somewhat different (more like a traditional GPU, probably because they started from Imagination IP), but over time Apple has applied its standard package of throwing intelligence at the problem, not just raw frequency or MOAR GPU CORES! The most recent public GPU patents (around 2020, 2021) show some really cool ideas that, while they won't show up in basic dense linear algebra tests, should substantially speed up both real graphics and many real-world GPGPU compute use cases (graphs, sparse linear algebra, FFT, stuff like that).
I'd like to see more cross-comparison of Apple GPU results against other GPUs, but I have not yet found such a resource on the net. Is there a secret GB6 browser site that's not yet visible to Google search?