Also, according to data posted in this discussion, Geekbench produced scores that differ by 13% between Linux and Windows when running on the same PC. Many people believe that Geekbench favors Apple.
I believe the differences are due to how Linux, Windows, and macOS are designed.

The Linux kernel is monolithic, while Windows and macOS are based on a micro-kernel design. So Windows and macOS will be at a disadvantage when dealing with I/O, as drivers are user-space processes. Linux drivers run in kernel space, so they have less overhead when processing.
 
I can't really speak to Windows, but XNU (the macOS kernel) is monolithic, not micro.

A monolithic kernel puts everything into a single address space, with communication between subsystems consisting of function calls. On the other hand, a microkernel is a tiny thing that's basically just a memory allocator, process scheduler, and messaging API, with all other kernel subsystems - device drivers, filesystems, network stack, etc - broken out into separate independent processes which communicate with each other (and userspace) by sending messages through the microkernel.
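
Purely as an illustration (made-up Swift types like DiskDriver, MonolithicVFS, and Message, nothing resembling a real kernel interface), the two call paths look roughly like this:

```swift
// Monolithic: subsystems live in one address space and call each other directly.
struct DiskDriver {
    func readBlock(_ lba: Int) -> [UInt8] { Array(repeating: 0, count: 512) }
}
struct MonolithicVFS {
    let driver = DiskDriver()
    func read(path: String) -> [UInt8] { driver.readBlock(0) }   // an ordinary function call
}

// Microkernel: subsystems are separate processes; every hop is a message routed
// through the kernel, which adds copies and context switches.
struct Message { let port: String; let payload: [UInt8] }
func kernelSend(_ m: Message) -> Message {
    // stand-in for Mach-style IPC; in a real system this crosses process boundaries
    Message(port: "reply:\(m.port)", payload: Array(repeating: 0, count: 512))
}
func microkernelRead(path: String) -> [UInt8] {
    let fsReply  = kernelSend(Message(port: "fs-server",   payload: Array(path.utf8)))
    let devReply = kernelSend(Message(port: "disk-driver", payload: fsReply.payload))
    return devReply.payload
}
```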

Lots of people get tripped up by the fact that XNU is partly based on Mach, and Mach is a microkernel, so that must mean XNU is a microkernel too, right? But really XNU just uses Mach code as one of the building blocks in a monolithic design. There's no rule which says "when you borrow Mach's memory, process, and message passing code, you must push everything else out into separate processes".

All that said, a fairly recent development in XNU is increasing support for userspace drivers. This isn't being done for the sake of making XNU into a microkernel, it's to improve system security. Drivers are a common weak point in attacks against the kernel, so if you can isolate them into their own processes, a driver 0-day may not be able to compromise the whole system anymore.
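
A toy way to picture that isolation (this is not DriverKit code; /usr/bin/true just stands in for an out-of-process driver):

```swift
import Foundation

// The "driver" runs as a separate child process, so even if it crashes or
// misbehaves, the host process's address space is untouched.
let driver = Process()
driver.executableURL = URL(fileURLWithPath: "/usr/bin/true")
do {
    try driver.run()
    driver.waitUntilExit()
    print("driver exited with status \(driver.terminationStatus); host still running")
} catch {
    print("could not launch the stand-in driver: \(error)")
}
```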
 
Look how far we've strayed from the original topic, 6 pages in ;)
 
I can't really speak to Windows, but XNU (the macOS kernel) is monolithic, not micro.
Wikipedia describes XNU and NT as hybrid kernels.
[Image: Wikipedia diagram comparing monolithic, microkernel, and hybrid kernel structures]

 
Do you want to discredit a benchmark because it scales linearly with frequency?
I want to discredit a benchmark because it is limited in what performance aspects it benchmarks. I don't know what the rest of your comment is referring to.
 
We don't seem to use discredit with the same meaning. I would discredit a benchmark that is unfair, not one that is useless for drawing general conclusions.

Do you also discredit all the app-specific benchmarks that Apple uses to promote its hardware?
[Image: attached screenshot of Apple's app-specific benchmark claims]
 
Pretty much. It is all marketing and should be taken with a very large grain of salt.
 
That's an interesting discussion. However, "better for CPU benchmarking" does not necessarily have anything to do with cross-platform. It may be good for benchmarking CPUs but still not useful for cross-platform comparisons.

It has everything to do with cross-platform. Andrei/Anandtech didn't say better for "Apple CPU benchmarking". He said CPU benchmarking, meaning cross-platform. "A ton of the comments in this thread are people crapping on Geekbench because Torvalds said something about it 8 years ago or many other popular talking points which are just wrong. The more informed users here are a very minor group of the userbase and you just have to visit broader forums out there or even general subreddits such as r/amd or r/intel to see that OP's point of view of Cinebench is very much representative of what the broad public is interpreting in terms of benchmarks. That's the whole point of discussion."

With that said, even Geekbench's short bursts don't fully utilize the GPU cores in Apple silicon. GFXBench or 3DMark are better for GPU benchmarking.
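
To illustrate what a sustained load means versus a short burst, here's a minimal Swift/Metal sketch (the "spin" kernel, the loop counts, and the buffer size are all made up; this is not how Geekbench or GFXBench actually measure anything):

```swift
import Metal
import Foundation

// A trivial compute kernel that just burns ALU time per thread.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void spin(device float *buf [[buffer(0)]],
                 uint id [[thread_position_in_grid]]) {
    float x = buf[id];
    for (int i = 0; i < 10000; ++i) { x = fma(x, 1.0001f, 0.0001f); }
    buf[id] = x;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "spin")!)
let queue = device.makeCommandQueue()!
let count = 1 << 22
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let start = Date()
for _ in 0..<100 {                                   // sustained load: many dispatches back to back
    let cmd = queue.makeCommandBuffer()!
    let enc = cmd.makeComputeCommandEncoder()!
    enc.setComputePipelineState(pipeline)
    enc.setBuffer(buffer, offset: 0, index: 0)
    enc.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
    enc.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()                         // simple but serializing; real tools pipeline submissions
}
print("100 dispatches took \(Date().timeIntervalSince(start))s")
```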
 
Apple GPU cores rely on more parallelism at lower clock speeds, combined with the need for very high occupancy (96 SIMDs/core); Nvidia GPUs max out at 48 SIMDs/core. In addition, Apple GPUs quickly throttle to a low-power state. You must keep them fully occupied and not leave gaps. Finally, there's the absurd latency of communicating over "shared" memory.
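
Taking those figures at face value (96 SIMDs per core as claimed above, 32 threads per SIMD, and a 32-core M1 Max GPU as the example; none of these are official Apple specs), the back-of-envelope thread count needed to saturate the GPU is:

```swift
// Rough occupancy arithmetic based on the numbers quoted in this post.
let gpuCores = 32                   // e.g. a 32-core M1 Max GPU
let simdsPerCore = 96               // claimed maximum resident SIMDs per core
let threadsPerSimd = 32             // assumed SIMD (simdgroup) width
print(gpuCores * simdsPerCore * threadsPerSimd)   // 98304 threads in flight just to fill the machine
```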

One weird thing was the M1 Ultra on Geekbench. It's only 1.5x the M1 Max, perhaps because UltraFusion isn't perfect? Alternatively, I've seen a possible CPU bottleneck in OpenMM produce this kind of sub-linear scaling. Driver bottlenecks place an ultimate limit on performance, similar to Amdahl's Law. Or it's just that the 48-core model is the only one tested.
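
As a rough Amdahl-style estimate (assuming the Ultra really has exactly 2x the Max's GPU resources and the ~1.5x scaling figure holds), the implied serialized fraction works out like this:

```swift
// speedup = 1 / ((1 - p) + p/n)  =>  p = (1 - 1/speedup) / (1 - 1/n)
let n = 2.0          // M1 Ultra vs. M1 Max resource ratio (assumed exactly 2x)
let speedup = 1.5    // observed Geekbench/GFXBench scaling (approximate)
let p = (1 - 1 / speedup) / (1 - 1 / n)
print("parallel fraction ≈ \(p), serialized fraction ≈ \(1 - p)")   // ≈ 0.67 and 0.33
```

In other words, if those assumptions hold, roughly a third of the workload behaves as if it were serialized, e.g. driver or CPU-bound work.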
 
Even GFXBench shows the Ultra only about 1.5x faster than the Max.
 
My guess is that people just get the cheapest Ultra with 128 GB of memory. Many people have similar configs for the MBP: 24-core M1 Max, 64 GB of memory. I have it the other way: 32 cores, 32 GB, because I only care about the GPU. I wanted to get the 512 GB SSD but luckily went with 1 TB.
 
I thought of how to make Cinebench fair for M1. Perform SIMD NEON vectorized operations on collections of 8-bit signed integers, then stream them into a certain JIT compiler. Next, use high-performance L1I caches to function-call into a library object. Use "multicore acceleration" to perform several function calls in parallel. The base M1 will match the i9-13900K and the M1 Max will far surpass it. M1 Ultra will be...the most ultra CPU ever benchmarked. Intel and AMD will have to publicly admit defeat and switch to ARM because of its superior ray tracing performance.
 