It has a huge impact. High memory bandwidth is one of the main reasons GPUs are used for scientific computing and AI workloads. The entry-level NVIDIA desktop card has 272 GB/s of memory bandwidth, and everyone complains about how low that is and how it holds the card back.
So, to make it clear again: 150 GB/s is excellent for a CPU. It becomes meh when that same bandwidth also has to feed a GPU that supposedly performs around the level of entry-level desktop cards. What makes it terrible is that it was deliberately cut back from previous generations.
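If you want to see where your own machine sits, here's a minimal NumPy sketch of a bandwidth-bound kernel (the array size and run count are arbitrary choices, and a plain copy won't hit the theoretical peak, but it lands in the right ballpark):

```python
import time
import numpy as np

# A big array copy moves 2 * N * 8 bytes (one read, one write) and does
# almost no arithmetic, so for large N the runtime is dominated by
# memory traffic rather than compute.
N = 100_000_000                      # ~800 MB per array; shrink to fit RAM
src = np.random.rand(N)
dst = np.empty_like(src)

best = float("inf")
for _ in range(5):                   # best of a few runs to skip warm-up noise
    t0 = time.perf_counter()
    np.copyto(dst, src)
    best = min(best, time.perf_counter() - t0)

gb_moved = 2 * N * 8 / 1e9           # read src + write dst
print(f"~{gb_moved / best:.0f} GB/s effective memory bandwidth")
```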
I'm not sure I disagree with any of that.
Don't compare it to Intel or Qualcomm.
At the end of the day, Apple competes with other laptop/desktop brands. How fast AMD, Intel, and Qualcomm iterate factors into their decision-making.
Compare it to the M2->M3 change. I won't even say compare the Maxes, because those got more P-cores. Or compare it to AMD. I'd take a machine with an M2 Pro over an M3 Pro in a heartbeat. The benchmark probably most relevant to me is the PassMark Physics Test, and the M2 Pro beats the M3 Pro by 60% in that test.
…and that may be relevant to your use case, but for almost any other use case, the M3 Pro is faster.
And even when you're doing physics calculations, you don't do that all day. That 60% only helps you for a fraction of the time.
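Quick Amdahl's-law math on that; the 1.6x per-task speedup is the benchmark figure above, and the time fractions are made up for illustration:

```python
# Amdahl's law: speeding up only part of the work speeds up the whole
# in proportion to how big that part is.
def overall_speedup(fraction_accelerated: float, speedup: float) -> float:
    return 1 / ((1 - fraction_accelerated) + fraction_accelerated / speedup)

# A 60% faster physics kernel (1.6x) under made-up usage fractions:
for frac in (0.1, 0.25, 0.5):
    print(f"{frac:.0%} of your time in physics -> "
          f"{overall_speedup(frac, 1.6):.2f}x overall")
# 10% -> 1.04x, 25% -> 1.10x, 50% -> 1.23x
```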
Because those workloads can run on a toaster. It's totally pointless to benchmark them, run around happy that your new CPU has faster cores, and then not feel a damn bit of difference while actually doing that stuff. That's why I'm totally baffled by the new direction Geekbench is taking.
What they're doing is looking at how computers are actually used. Heavily multithreaded code is rare, and when it exists, it's often just moved to the GPU or NPU.
Yeah, software dev is lightweight on the CPU, contrary to what most people believe.
Well, it depends on the toolchain. I find that AOT toolchains make far more use of the CPU, and of multithreading specifically, than JIT ones do, for obvious reasons; JIT toolchains push much of the "compilation" step to the end user.
But, yes.
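To illustrate the AOT side: translation units are independent, so a single machine's build can pin every core. Here's a toy Python sketch of what `make -j` effectively does (the `cc` compiler and the local `src/` directory are placeholder assumptions):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Each .c file is an independent translation unit, so AOT compilation is
# embarrassingly parallel; this is roughly what `make -j$(nproc)` does.
# `cc` on PATH and the src/ directory are assumptions for this sketch.
sources = list(Path("src").glob("*.c"))

def compile_one(src: Path) -> None:
    obj = src.with_suffix(".o")
    subprocess.run(["cc", "-O2", "-c", str(src), "-o", str(obj)], check=True)

# Threads are fine here: each worker just waits on a compiler subprocess.
with ThreadPoolExecutor() as pool:
    list(pool.map(compile_one, sources))
```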
In 2016 I worked at a company where we got i7s - with HDDs. The IT department was totally clueless.
Yeah, in that situation, I/O was going to be a way bigger bottleneck than the CPU.
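Easy to demonstrate, too: time the disk read separately from the CPU work on the same bytes. A rough sketch (the file path is a placeholder; point it at anything large that still fits in RAM):

```python
import hashlib
import time

PATH, CHUNK = "big_file.bin", 1 << 20    # placeholder path, 1 MiB chunks

# I/O phase: just pull the bytes off disk.
chunks, t0 = [], time.perf_counter()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        chunks.append(chunk)
io_s = time.perf_counter() - t0

# CPU phase: hash the now in-memory bytes.
h, t0 = hashlib.sha256(), time.perf_counter()
for chunk in chunks:
    h.update(chunk)
cpu_s = time.perf_counter() - t0

# On an HDD the I/O time dwarfs the hashing; on a fast SSD they get closer.
print(f"I/O: {io_s:.2f}s   CPU (SHA-256): {cpu_s:.2f}s")
```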