Then I'll throw that right back: for that case you already have the single-core result, so testing it again with the multi-core result is redundant.
I think the idea of the Geekbench 6 multi-core result is to give a realistic measure.
I guess you could argue that Geekbench 6 should really offer three results — the single-core result, the Geekbench 5-style multi-core result, and a "realistic core utilization" result.
Probably a gazillion things neither of us has heard of. In science there are lots of projects that should run on GPU, but nobody has the time and/or expertise to port them. Or it's just more cost-effective to leave them as-is.
I would wager if it hasn't been ported to GPU, it probably also doesn't scale well to 96 cores…
And in software dev it also depends on the project. Do you have a large monolith, or 100+ microservices? The latter will scale well.
Well, only if all of those microservices need a recompile, which they hopefully rarely do (or else you really have a monolith after all).
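To illustrate why the many-services case scales: each service builds independently of the others, so you can fan all of the builds out across however many cores you have. A minimal Swift sketch of that fan-out, where buildService is just a hypothetical stand-in for compiling one service:

    // Hypothetical: the builds don't depend on each other, so a task group
    // can run as many of them concurrently as the scheduler has cores for.
    func buildService(_ name: String) async {
        // stand-in for "compile this one microservice"
    }

    func buildAll(_ services: [String]) async {
        await withTaskGroup(of: Void.self) { group in
            for service in services {
                group.addTask { await buildService(service) }
            }
        }
    }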
Swift scales to many cores better than .NET does, but I think that's mostly a function of AOT. And even with Swift, we see a typical Amdahl's-law-like curve in the toolchain: 10 cores are only 11% faster than 8 (rather than 20%), and 20 cores are only about 30% faster than 10 (rather than, y'know, 100%).
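For what it's worth, those numbers line up with Amdahl's law if you assume roughly 90% of the build parallelizes. A minimal sketch (the 0.9 fraction is a guess fitted to the numbers above, not something I measured):

    // Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
    // parallel fraction of the work. p = 0.9 is an assumption, not a measurement.
    func amdahlSpeedup(cores n: Double, parallelFraction p: Double) -> Double {
        1.0 / ((1.0 - p) + p / n)
    }

    let p = 0.9
    for (fewer, more) in [(8.0, 10.0), (10.0, 20.0)] {
        let gain = amdahlSpeedup(cores: more, parallelFraction: p) /
                   amdahlSpeedup(cores: fewer, parallelFraction: p) - 1.0
        print("\(Int(fewer)) -> \(Int(more)) cores: +\(Int((gain * 100).rounded()))%")
    }

That prints gains of about 12% and 31%, which is close to what I'm actually seeing in the toolchain.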
I think the point of diminishing returns is important, including for the M3 Max: yeah, it reaches roughly the same result as the M2 Ultra, but it needs a third fewer cores to do so. That means I'm skeptical about the usefulness of the Ultra even for most Mac Studio buyers, and I'm especially skeptical about the usefulness of the Threadripper.
Another complication here, though, is that I think Threadripper offers a Turbo Boost-like mechanism, and I think Apple's SoCs so far do not.