That's what's keeping me happy right now. I stream on Twitch five days a week, and with an Intel CPU I have access to all the PC games I could want to stream.

I mostly stream Switch games but even still.
Well yeah, using a Mac primarily for games is an awful idea. My M1 Mac mini plays CS:GO at a horrible framerate (30 fps) on min or medium settings. My Mac Pro with an RX 580 used to do only slightly better, but booting into Windows, it'd get 200 fps instead. So it's a compatibility issue, and I'll bet the Mac Studio doesn't do much better.

Playing games on Mac used to be an OK option, and somehow it got worse over time, probably because Apple declared war on OpenGL. Good thing I don't really play games anymore.
 
That Threadripper needs 64 cores rather than 20, which also means you'll be achieving that performance a lot less often.

It also has a TDP of 280W, whereas an M1 Ultra will typically draw a third of that.
Yeah, these tests are usually either one core or all the cores you have, but it's pretty common for a workflow to sit somewhere in between, like 2-4 cores, where the plain M1 (non-Pro/Max/Ultra) will still do well. Google Meet seems to just peg two cores at 100% when it runs; lovely.
 
That Threadripper needs 64 cores rather than 20, which also means you'll be achieving that performance a lot less often.

It also has a TDP of 280W, whereas an M1 Ultra will typically draw a third of that.
Agreed. People sometimes assume that power efficiency doesn't matter on the desktop, but it absolutely does. Apple's approach allows them to run all cores at almost full-clock speed when fully loaded.

x86 can't do that, at least not in the higher-core-count CPUs. Thermal throttling kicks in, and power consumption doesn't scale linearly with clock speed.
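As a rough illustration of why power doesn't scale linearly with clock speed (toy numbers, not Apple's or Intel's actual curves): dynamic CPU power goes with capacitance × voltage² × frequency, and since voltage has to rise roughly in step with frequency, power grows roughly with the cube of the clock.

```python
# Toy model of dynamic CPU power: P = C * V^2 * f.
# Voltage must rise roughly in proportion to frequency, so power
# ends up scaling ~cubically with clock speed. All constants are
# illustrative, not measured values for any real chip.

def dynamic_power(freq_ghz, base_freq=3.0, base_volts=1.0, c=10.0):
    """Estimate relative dynamic power at a given clock (toy model)."""
    volts = base_volts * (freq_ghz / base_freq)  # V scales ~linearly with f
    return c * volts**2 * freq_ghz

p_3ghz = dynamic_power(3.0)
p_5ghz = dynamic_power(5.0)
print(f"5.0 GHz vs 3.0 GHz clock: {5.0 / 3.0:.2f}x")
print(f"5.0 GHz vs 3.0 GHz power: {p_5ghz / p_3ghz:.2f}x")
```

So a ~1.67x clock bump costs roughly 4.6x the power in this model, which is why many-core x86 parts can't hold near-max clocks on every core at once.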
 
Does anyone have any idea how the maxed-out MacBook Pro with M1 Max
10-core CPU / 32-core GPU / 64GB unified memory

compares to the maxed-out Mac Studio:

Apple M1 Ultra with 20-core CPU, 64-core GPU, 32-core Neural Engine / 128GB unified memory

How much faster is the Mac Studio?
Are there any benchmarks or comparisons anywhere?
 
With that kind of performance, why can't we just use Rosetta to run Windows 10?
Good grief, my 18-year-old G5 can run Windows XP pretty fast with Connectix Virtual PC; the new M1 Ultra could probably emulate my G5 running Virtual PC faster.
 
Does anyone have any idea how the maxed-out MacBook Pro with M1 Max
10-core CPU / 32-core GPU / 64GB unified memory

compares to the maxed-out Mac Studio:

Apple M1 Ultra with 20-core CPU, 64-core GPU, 32-core Neural Engine / 128GB unified memory

How much faster is the Mac Studio?
Are there any benchmarks or comparisons anywhere?

In heavily multithreaded workloads, it would be about double the CPU performance. Single-threaded performance is about the same, so if you aren't saturating the M1 Max, the M1 Ultra isn't really going to bring much of a boost.

In terms of RAM, 64GB is already a very decent amount of RAM. Doubling that won't really accelerate tasks unless they actually need (and can use) that kind of RAM. It depends very much on the workload.
 
When one M1 Max isn’t enough, just slap two of them together and call it the M1 Ultra!

But seriously, these are some very impressive initial benchmarks. Can’t wait to see what the benchmarks are when it’s officially released!

Side note: why "Ultra"? Isn't "Max" supposed to be the definitive best? Whoever comes up with Apple's naming schemes needs to get a dictionary.
Not really. Latin "maxime" means "most", and "ultra" means "beyond", so Ultra is "beyond the limit", with the implication that it transcends the maximum.

The hint of the meaning might also be in the expression "non plus ultra" - "nothing more beyond".


Where I went to school, brothers were given the suffix "major" and "minor" after their family name, indicating the elder and younger of the two. We actually had 3 brothers, "major", "minor" and "minimus", and if there had been a fourth, I suspect he would have been "Maximus"...not sure if there would have been an "ultimus" if there were 5 brothers!
 
Well yeah, using a Mac primarily for games is an awful idea. My M1 Mac mini plays CS:GO at a horrible framerate (30 fps) on min or medium settings. My Mac Pro with an RX 580 used to do only slightly better, but booting into Windows, it'd get 200 fps instead. So it's a compatibility issue, and I'll bet the Mac Studio doesn't do much better.

Playing games on Mac used to be an OK option, and somehow it got worse over time, probably because Apple declared war on OpenGL. Good thing I don't really play games anymore.
That's the thing: with an i9-10900K and a 6900 XT, it might not be nearly as powerful as a Mac Studio, but it gets my work done just fine, and I can play every game I've got on the highest settings under Windows…

So I’ll definitely be happy for now!
 
Worse than useless, it's misleading.

Most benchmarks are merely useless because they're generally measuring "turbo speed".

A fair benchmark would "warm up" for 2 minutes, then run for 10 minutes and count how many loops (of slightly modified operations, to prevent the RAM-cache cheating seen in FPS game benchmarks) it can perform.


I don't use Premiere every day but when I use it, I use it all day long. Any time I take a break from rendering, the first couple minutes are always faster than the next couple of hours because the machine heats up. Yet almost every benchmark I've ever seen completes within 5-90 seconds, rendering the whole simulation rather pointless.

Best yet, give me a benchmark that warms up for 30 minutes and then renders from minutes 31-60. That'd be more accurate and useful. The difference between 5 and 10 seconds being "twice as fast" is very little compared to something that takes 2 hours vs 4 hours or worse, 20 hours vs 40 hours.
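The sustained-load benchmark described above could be sketched roughly like this; the durations and the example workload are just placeholders, and a real harness would vary inputs more aggressively to defeat caching:

```python
import time

def sustained_benchmark(work, warmup_s=120, run_s=600):
    """Run `work(i)` repeatedly: discard a warm-up period so turbo boost
    and thermals settle, then count completed loops over the timed run.
    `work` should vary with its argument to defeat cache-friendly repeats."""
    i = 0
    deadline = time.monotonic() + warmup_s
    while time.monotonic() < deadline:  # warm-up: results discarded
        work(i)
        i += 1
    loops = 0
    deadline = time.monotonic() + run_s
    while time.monotonic() < deadline:  # timed window: count loops
        work(i + loops)
        loops += 1
    return loops

if __name__ == "__main__":
    # Demo with short durations; a fair run would use minutes, not seconds.
    score = sustained_benchmark(lambda i: sum(range(10_000 + i % 97)),
                                warmup_s=1, run_s=2)
    print(f"loops completed: {score}")
```

The loop count after warm-up is the score, so a machine that throttles under sustained load scores lower, which is exactly what the short 5-90 second benchmarks hide.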

GeekBench is just a glorified pissing contest.
Benchmark warming up wouldn't help. Then the temperature of the environment starts to matter. With all those possible factors, I'd rather the benchmark just have as many things controlled as possible, and that's what Geekbench is. You can judge for yourself how well the machine would perform in your environment for your workload. Beyond that, try it yourself.

Also, I get that this is a big issue for laptops, but if you're talking about a desktop computer, idk if thermal throttling occurs beyond turbo boost not activating. The reason my old MacPro5,1 outperformed my significantly faster work-provided Intel MBP (2019 16" i7) was that my room is stupidly hot and has no AC, and the Mac Pro is fine with that.
 
Cool score, but don't get too excited.

This would seem to indicate that this wipes the floor with the 12900K, for example, but that isn't supported by other benchmarks. Geekbench is great for comparing equivalent architectures, but that's about it.

In this case, 24k for the M1 Ultra vs 17k for the 12900k.

But then use a more realistic benchmark, like Cinebench (also not perfect), and it shows a different story. An M1 Pro/Max will score around 12k, but a 12900K will score 27k. So double the cores for the M1 Ultra and you'd expect around 24-25k with perfect scaling. Recent Alder Lake mobile CPU releases also show they can significantly outpace an M1 Pro or Max with even fewer cores than the desktop high-end part.
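That "perfect scaling" estimate can be sanity-checked with Amdahl's law, using the ~12k figure from the post; the parallel fractions below are assumptions, not measurements:

```python
def amdahl_speedup(parallel_fraction, core_multiplier):
    """Speedup from multiplying core count, per Amdahl's law."""
    p = parallel_fraction
    return 1 / ((1 - p) + p / core_multiplier)

base_score = 12_000          # M1 Pro/Max Cinebench figure cited above
for p in (1.0, 0.95, 0.90):  # assumed parallel fractions of the workload
    est = base_score * amdahl_speedup(p, 2)
    print(f"parallel fraction {p:.2f}: ~{est:,.0f}")
```

With a fully parallel workload, doubling cores gives exactly 24k; even a small serial fraction pulls the estimate noticeably below that, which is why "perfect scaling" is an upper bound.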

Keep in mind this is a 16P/4E vs 8P/8E comparison, which puts the 12900K's per-core lead in perspective: the 12900K has far fewer performance cores. It's just that the 12900K's performance cores are so fast, as are its efficiency cores.

Alder Lake gulps power for sure, but we're talking benchmarks here. I highly doubt we will see real-world benchmarks showing a lead for the Ultra vs a 12900K. The Ultra will do well, but just as with the Pro/Max, it will not lead the pack, except in efficiency.

Also note the price. It's a $1,400 USD upgrade from an already $2,000 USD base model Mac Studio to get those extra 10 cores. The 12900K is $500 USD at the moment. The Ultra is an insane price, especially if you don't need the GPU grunt you're getting for the cost.
 
I occasionally see this argument, and it’s never backed up. Which workload in particular is arch-specific? https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf
The fact that it under-scores Alder Lake desktop so hard is an example of how it is not faultless. Geekbench puts it at less than 50% faster, yet Cinebench shows it outperforming by over 2x. I'm not going to compile an entire list of benchmarks, but it is an example.

Even within the same arch it can falter, for example in the GPU performance of the M1 Max. Most benchmarks show roughly linear increases in performance, yet we're only seeing about a 50% increase over the Pro with twice the cores. I believe I remember this being attributed to boosting behaviour, but regardless, it is an inconsistency that brings the entire dataset into question. What else is it misreporting?
 
The fact that it under-scores Alder Lake desktop so hard is an example of how it is not faultless. Geekbench puts it at less than 50% faster, yet Cinebench shows it outperforming by over 2x. I'm not going to compile an entire list of benchmarks, but it is an example.

No, that's just an example of Cinebench being less parallelized.



Even within the same arch it can also falter, for example in the GPU performance of the M1 Max.

What does the GPU have to do with the arch? It’s neither ARM nor x86.

Most benchmarks show roughly linear increases in performance,

“Most benchmarks”? Where are you seeing all these M1 Ultra benchmarks?


 
I already ordered my Ultra. Vectorworks is finally going to fly. Does anyone need a Razer eGPU enclosure with an AMD Radeon Pro WX 7100? I have one that needs to move on by the first of April.
Only Vectorworks could need a processor this powerful just to open in under 5 minutes.
 
Even within the same arch it can falter, for example in the GPU performance of the M1 Max. Most benchmarks show roughly linear increases in performance, yet we're only seeing about a 50% increase over the Pro with twice the cores. I believe I remember this being attributed to boosting behaviour, but regardless, it is an inconsistency that brings the entire dataset into question. What else is it misreporting?

I believe the actual issue was that the benchmark did not run long enough for the G13 GPU cores to ramp up to full performance...?
 
Meanwhile, the same Mac13,2 model also leaked a GPU (OpenCL) score of 77,164. (Note that this is the 48-core GPU, not the 64-core, but even the 64-core will only be around 90,000 if the M1 Max 24-core vs 32-core results are any guide.) Which is absolutely disappointing: the W6900X they're comparing it to scored ~117,000 in OpenCL tests, so I wonder why the M1 Ultra scored so low?
 
Meanwhile, the same Mac13,2 model also leaked a GPU (OpenCL) score of 77,164. (Note that this is the 48-core GPU, not the 64-core, but even the 64-core will only be around 90,000 if the M1 Max 24-core vs 32-core results are any guide.) Which is absolutely disappointing: the W6900X they're comparing it to scored ~117,000 in OpenCL tests, so I wonder why the M1 Ultra scored so low?
Seems to be a lot of variance in Compute scores, so I wouldn’t read too much into them yet.
 
Exciting!

So, it’s down to thermal throttling and sustained performance for me, hand in hand with noise profile.

What's the maximum speed one could get with TB4? Can it handle an external enclosure with 4x SSDs running at maximum read (about 500 MB/s each)?
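For a back-of-envelope answer: Thunderbolt 4's link rate is 40 Gbit/s, and real-world PCIe data throughput over the tunnel is commonly quoted around ~3 GB/s (a rough rule of thumb, not a spec value):

```python
# Can one TB4 link feed four SSDs reading at 500 MB/s each?
link_gbps = 40                 # TB4 link rate in gigabits per second
raw_gbs = link_gbps / 8        # 5.0 GB/s raw link bandwidth
usable_gbs = 3.0               # rough practical ceiling for PCIe data
needed_gbs = 4 * 0.5           # four SSDs x 500 MB/s = 2.0 GB/s

print(f"raw: {raw_gbs} GB/s, usable ~{usable_gbs} GB/s, needed: {needed_gbs} GB/s")
print("fits" if needed_gbs <= usable_gbs else "does not fit")
```

So four drives at ~500 MB/s total about 2 GB/s, comfortably under even the conservative usable figure; it's four high-end NVMe drives at multiple GB/s each that would saturate the link.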
 