
senttoschool

macrumors 68030
Original poster
Nov 2, 2017
2,532
5,263
According to AnandTech's Andrei Frumusanu, the GB Compute benchmark is too short to ramp up the M1 Max's GPU. This means there's zero chance it ramps up the M1 Ultra correctly.



Source is in the comments here: https://www.anandtech.com/show/17024/apple-m1-max-performance-review/7

GPUs tend to scale almost linearly with the number of cores and the corresponding increase in memory bandwidth. However, this scaling does not happen for GB Compute from the M1's 8 GPU cores to the Ultra's 64 GPU cores.
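To make the scaling claim concrete, here is a minimal sketch (with made-up illustrative scores, not real Geekbench numbers) of how scaling efficiency can be computed from core counts:

```python
def scaling_efficiency(score_small, score_big, cores_small, cores_big):
    """Ratio of observed speedup to the ideal (linear) speedup."""
    ideal = cores_big / cores_small      # e.g. 64 / 8 = 8x for M1 -> M1 Ultra
    observed = score_big / score_small
    return observed / ideal

# Perfect linear scaling gives an efficiency of 1.0:
print(scaling_efficiency(1000, 8000, 8, 64))  # 1.0
# A chip whose score only doubles across an 8x core increase:
print(scaling_efficiency(1000, 2000, 8, 64))  # 0.25
```

If the M1 Ultra's efficiency by this measure is far below what other GPU families show when adding cores, that points at the benchmark failing to load the hardware rather than at the hardware itself.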

This Macrumors article is quite wrong: https://www.macrumors.com/2022/03/17/m1-ultra-nvidia-rtx-3090-comparison/
 

iDron

macrumors regular
Apr 6, 2010
219
252
Why not accurate? Benchmarks are always just a specific metric for performance. If Geekbench 5 is computationally optimized for Apple silicon (which they said it is), then a low score just means it does not perform as well in this particular metric.
They might at some point want to update their method, so we may see a Geekbench 6 better suited to those chips at the top end.
 
  • Like
Reactions: Analog Kid

sunny5

macrumors 68000
Jun 11, 2021
1,689
1,541
From my obsessive friends: the difference in WoW at the same settings between the 3090 and the M1 Ultra is 7-9 fps, if that helps.
I wish to see the result with Baldur's Gate 3 since it's the only new native game so far.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
919
1,115
Other GPUs are also tested in a short period of time.
Very true. I suppose that for real-world compute tasks and gaming, the slower ramp-up time of the M1 Ultra wouldn't be as much of an issue. Once you're gaming, you're going to be pushing the GPU for more than a few seconds at a time.

Geekbench is very burst-heavy in comparison to a lot of the other benchmarks out there. It'll do a second or two of heavy work (and try to max out the computer's resources), then rest for a second or two before starting the next test. I suspect they're trying to avoid letting thermal throttling bias the CPU to look better at some types of tasks than others, which makes sense if you're trying to see exactly what a CPU or GPU is best at. But sustained performance under thermal constraints has never really been Geekbench's strong suit. Benchmarks such as Cinebench are much better for testing that kind of sustained load.
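As a rough illustration of the ramp-up argument, here is a toy model (assumptions mine: the clock ramps linearly from idle to peak over a fixed ramp time, then holds) showing how a slow-ramping GPU scores lower on short bursts even with an identical peak:

```python
def work_done(task_seconds, ramp_seconds, max_freq=1.0):
    """Integrate frequency over time for a clock that ramps linearly
    from 0 to max_freq over ramp_seconds, then holds at max_freq."""
    if task_seconds <= ramp_seconds:
        # Task ends while still ramping: area of a triangle.
        return 0.5 * max_freq * task_seconds**2 / ramp_seconds
    ramp_area = 0.5 * max_freq * ramp_seconds
    return ramp_area + max_freq * (task_seconds - ramp_seconds)

fast_ramp = work_done(2.0, ramp_seconds=0.1)  # ~1.95 units of work
slow_ramp = work_done(2.0, ramp_seconds=1.5)  # ~1.25 units of work
# Same peak, but the slow-ramping GPU "scores" roughly a third lower
# on a 2-second burst. The gap shrinks as the task gets longer.
```

On a 20-second task the same two configurations land within a few percent of each other, which is consistent with sustained benchmarks agreeing while burst benchmarks diverge.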
 
  • Like
Reactions: ikir

upandown

macrumors 65816
Apr 10, 2017
1,257
1,248
According to AnandTech's Andrei Frumusanu, the GB Compute benchmark is too short to ramp up the M1 Max's GPU. This means there's zero chance it ramps up the M1 Ultra correctly.


Source is in the comments here: https://www.anandtech.com/show/17024/apple-m1-max-performance-review/7

GPUs tend to scale almost linearly with the number of cores and the corresponding increase in memory bandwidth. However, this scaling does not happen for GB Compute from the M1's 8 GPU cores to the Ultra's 64 GPU cores.

This Macrumors article is quite wrong: https://www.macrumors.com/2022/03/17/m1-ultra-nvidia-rtx-3090-comparison/
You really seem to have cracked the case. I forgot Anandtech mentioned Geekbench is not a good metric for the Max. This totally makes sense.
 
  • Like
Reactions: ikir

MayaUser

macrumors 68030
Nov 22, 2021
2,766
5,867
I wish to see the result with Baldur's Gate 3 since it's the only new native game so far.
I think WoW must also be native... I mean, the difference is too small.
I will ask and come back with an answer.
 

sunny5

macrumors 68000
Jun 11, 2021
1,689
1,541
I think WoW must also be native... I mean, the difference is too small.
I will ask and come back with an answer.
Well, that's an old game. I don't expect too much from it, unlike Baldur's Gate 3, which is still in beta but with better support.
 

MayaUser

macrumors 68030
Nov 22, 2021
2,766
5,867
Well, that's an old game. I don't expect too much from it, unlike Baldur's Gate 3, which is still in beta but with better support.
Yes, but the "benchmark" is accurate, right? WoW is an old game for the Nvidia 3090 too... so.
I will ask if they can try BG3. If it's in beta, can it be tested/played?
 

sunny5

macrumors 68000
Jun 11, 2021
1,689
1,541
Yes, but the "benchmark" is accurate, right? WoW is an old game for the Nvidia 3090 too... so.
I will ask if they can try BG3. If it's in beta, can it be tested/played?
BG3 will release in 2023, so I would wait unless you want to try the beta.
 

leman

macrumors Core
Oct 14, 2008
19,202
19,063
According to AnandTech's Andrei Frumusanu, the GB Compute benchmark is too short to ramp up the M1 Max's GPU. This means there's zero chance it ramps up the M1 Ultra correctly.

Yep. In fact, this has been widely described and it irritates me slightly that folks still take GB compute results for these GPUs as even remotely representative.

Other GPUs also tested in a short period of time.

Yes, “other GPUs”. How dare there be differences in GPU behavior between vendors, right? Apple GPUs have to switch between ultra-low power consumption and high performance, and it's likely this is done by checking how long a task takes.

Are there any other GPU benchmarks that could prove your point?

GB compute is pretty much the only mature compute benchmark on the market. But the poor scaling and obvious discrepancy with other GPU tests that run longer jobs make this fairly clear.

Why not accurate? Benchmarks are always just a specific metric for performance. If Geekbench 5 is computationally optimized for Apple silicon (which they said it is), then a low score just means it does not perform as well in this particular metric.
They might at some point want to update their method, so we will have a Geekbench 6, for those chips at the top end.

Not accurate because it does not utilize the hardware properly and produces a score that does not reflect the real potential of the hardware. No matter what their claims regarding optimizations are, the batch-size/problem-size issue is real, and so far it affects only Apple Silicon GPUs.
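The batch-size/problem-size point can be sketched with a simple fixed-overhead model (numbers are hypothetical, chosen only to show the shape of the effect): a wider GPU needs a larger problem before its peak rate shows up in the score.

```python
def effective_throughput(work_items, fixed_overhead_s, peak_rate):
    """Throughput for a job with a fixed startup cost (ramp-up,
    dispatch) plus work processed at the GPU's peak rate."""
    runtime = fixed_overhead_s + work_items / peak_rate
    return work_items / runtime

# Small job: the fixed overhead dominates, so an 8x-wider GPU
# barely looks faster (~1.6e6/s vs ~1.9e6/s here).
small_narrow = effective_throughput(1e6, 0.5, peak_rate=8e6)
small_wide   = effective_throughput(1e6, 0.5, peak_rate=64e6)

# Large job: the overhead is amortized and the measured ratio
# approaches the true 8x difference in peak rate.
large_narrow = effective_throughput(1e9, 0.5, peak_rate=8e6)
large_wide   = effective_throughput(1e9, 0.5, peak_rate=64e6)
```

Under this model, a short benchmark systematically understates the wider chip, which matches the observed gap between GB Compute and tests that run longer jobs.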
 

ikir

macrumors 68020
Sep 26, 2007
2,134
2,289
According to AnandTech's Andrei Frumusanu, the GB Compute benchmark is too short to ramp up the M1 Max's GPU. This means there's zero chance it ramps up the M1 Ultra correctly.


Source is in the comments here: https://www.anandtech.com/show/17024/apple-m1-max-performance-review/7

GPUs tend to scale almost linearly with the number of cores and the corresponding increase in memory bandwidth. However, this scaling does not happen for GB Compute from the M1's 8 GPU cores to the Ultra's 64 GPU cores.

This Macrumors article is quite wrong: https://www.macrumors.com/2022/03/17/m1-ultra-nvidia-rtx-3090-comparison/
We already know from other M1 results that Geekbench Compute is not accurate for Apple Silicon, and it is often used to bash Macs on PC websites/channels.
 
  • Like
Reactions: Ace McLoud

Xiao_Xi

macrumors 65816
Oct 27, 2021
1,479
919
If companies were less marketing-oriented, they would fix the benchmark mess immediately.

But controversy makes headlines, and I firmly believe this is part of Apple's marketing campaign.
 

iDron

macrumors regular
Apr 6, 2010
219
252
Not accurate because it is not utilizing the hardware properly and produces a score that does not reflect the real potential of the hardware. No matter what their claims wrt optimizations are, the batch size/problem size issue is real and it affects only AS GPUs so far.
There is a difference between accuracy and significance.

When you benchmark how long it takes to multiply matrices, or how many you can multiply in a given time, of course your benchmark is accurate as long as the code runs properly on the device.

Whether that is significant for real-world applications is a different matter. Or whether matrices of a different size would give you a different result.

That's why there is this plethora of benchmarks: each tells a different story and is used to promote whatever device looks ahead in that particular benchmark.
 

jmho

macrumors 6502a
Jun 11, 2021
502
995
Almost all benchmarks between Apple Silicon and standard PC architectures are going to be flawed because the architectures are so different. Good PC code is going to run poorly on AS and good AS code is going to run poorly on PC.

Even comparing well-optimized PC code to well-optimized AS code isn't truly fair, because they're going to be doing different workloads under the hood. (Although obviously this is the most practical comparison for actual human beings and their purchase decisions.)

It's like watching people try to compare a Tesla to normal petrol cars. Sure, the Tesla will beat them all to 60 mph, but it's not going to beat a Ferrari around a track. The question "Is a Tesla as fast as a Ferrari?" doesn't really make sense without making it far more specific.
 