
And it's around 250% slower than the RTX 4090 in its best-case-scenario synthetic app LOL
Those are interesting benchmarks, and we learn a couple of things from them. The M2 Ultra hangs with the RTX 4060 Ti while running OpenCL, which Apple has deprecated on macOS and no longer optimizes. That is almost the opposite of a "best-case scenario": it's like letting the Nvidia and AMD cards run on an Olympic track while the M2 Ultra runs on a rocky, wet gravel path. Add the fact that the M2 Ultra is an integrated GPU, and this chart actually makes Apple's GPU look good.

Apple could support OpenCL and improve performance but without that, it’s hard to have fair comparisons.
 
So, a few PCIe expansion slots justify a $3K+ price gap between the M2 Ultra Studio and the Pro tower? Seems rich (at least to me).
 
The problem with Apple's one-size-fits-all approach is that you have a chip that's very good for the average user but lousy for the pro market. How do they even get away with calling it "Pro" when there's nothing pro you can do with it? It's a good 20% or more behind the competition, and the RAM is a joke.
 
The problem with Apple's one-size-fits-all approach is that you have a chip that's very good for the average user but lousy for the pro market. How do they even get away with calling it "Pro" when there's nothing pro you can do with it? It's a good 20% or more behind the competition, and the RAM is a joke.
There is no competition. If you even come close to thinking that a Dell, Intel, Windows, Adobe, Nvidia, Cuda, Activision, Xbox, Android, Qualcomm, Samsung, Galaxy, Meta Quest thingy can compete with the Apple ecosystem, then you're not in the market.
 
The problem with Apple's one-size-fits-all approach is that you have a chip that's very good for the average user but lousy for the pro market. How do they even get away with calling it "Pro" when there's nothing pro you can do with it? It's a good 20% or more behind the competition, and the RAM is a joke.
I think part of the problem is that the market for a pro Mac desktop just isn't large enough for Apple to justify designing their own custom chip, so customers have to make do with a laptop chip that's been retrofitted for the purpose. It's one thing when you are buying a chip from another OEM (who bears all the R&D and manufacturing costs); it's another when you are the one absorbing all those costs yourself.

Laptops are still more popular, and Apple is right to focus their time and resources on what is more popular with their user base at the moment.
 
I think part of the problem is that the market for a pro Mac desktop just isn't large enough for Apple to justify designing their own custom chip,

Yup. For now, anyway.

Historically, it went something like “make a desktop CPU, scale it up a bit for servers, and have it trickle down to laptops”. But for Apple, economically, a different approach makes a lot more sense: “make a phone CPU, scale it up and add features for laptops, scale it up even further for the high end; scale it down slightly for small devices like watches”.

However, when you look at the memory controllers introduced in the M1 Pro, I do think there’s room for them to do high-end features, especially if they can eventually trickle down to being useful on the phone, but even sometimes if not — the Thunderbolt controller, say.

I wouldn’t put it past them to eventually do an M3 Quadra or M4 Quadra. But more specialized features like heterogeneous RAM, which work actively against the simplicity of their architecture? Probably a long shot.
 
Hardly anyone upgrades after just one generation. You keep the computer three to five years, then buy whatever is available at that time. Perhaps you go from an M1 to an M3 or M4.

It is the same with cell phones, very few people buy a new phone every year.

As for businesses, there is a 3-year period for capital depreciation. You can't write it off in only one year.

Apple does not bring out these new products every year hoping people will trade in one-year-old computers; there are plenty of people with Macs from 2019 and even older, plus new customers.

Finally, even if the new Mac is 20% faster, the job you are doing will not get done 20% faster. For example, the text I just typed would have taken just as long on a new Mac Studio as on my old 2014 Mac mini.

Mostly what our Macs do is wait for the user to move the mouse or type the next character. Look at Activity Monitor: mostly you'll see the CPU is nearly idle, even with 15 browser windows open.

A faster CPU only helps when you have a compute-intensive task where you have to wait and can do nothing productive while waiting. Mostly that means media editing, but sometimes certain engineering tasks too. Even when I use 3D CAD, I only wait for the computer now and then, maybe for a render to complete. Mostly the computer is waiting for me.
This is so true. The last time a computer significantly impacted my productivity was when I was an intern in 1997. I was making CAD renders, and the machine would be locked up for 45 minutes per render, during which I couldn't do anything but wait. Today, the same render takes about 30 seconds on any modern machine.
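The "20% faster chip doesn't make your day 20% faster" point is basically Amdahl's law: only the fraction of your time that is actually compute-bound speeds up. A rough sketch (the 5% compute-bound fraction is a made-up illustrative number, not a measurement):

```python
def effective_speedup(compute_fraction: float, cpu_speedup: float) -> float:
    """Overall speedup when only `compute_fraction` of wall-clock time
    is CPU-bound, and that portion gets `cpu_speedup` times faster."""
    return 1.0 / ((1.0 - compute_fraction) + compute_fraction / cpu_speedup)

# If only 5% of your day is spent waiting on the CPU, a 20% faster
# chip (1.2x) shaves almost nothing off the total:
print(effective_speedup(0.05, 1.2))  # ~1.008, i.e. under 1% faster overall
```

For a render farm that is compute-bound nearly 100% of the time, the same formula gives almost the full 1.2x, which is why the "pays for itself" argument works for some shops and not for typical desk work.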
 
Yup. For now, anyway.

Historically, it went something like “make a desktop CPU, scale it up a bit for servers, and have it trickle down to laptops”. But for Apple, economically, a different approach makes a lot more sense: “make a phone CPU, scale it up and add features for laptops, scale it up even further for the high end; scale it down slightly for small devices like watches”.

However, when you look at the memory controllers introduced in the M1 Pro, I do think there’s room for them to do high-end features, especially if they can eventually trickle down to being useful on the phone, but even sometimes if not — the Thunderbolt controller, say.

I wouldn’t put it past them to eventually do an M3 Quadra or M4 Quadra. But more specialized features like heterogeneous RAM, which work actively against the simplicity of their architecture? Probably a long shot.
I'll add something that a lot of people seem to disregard on the PC side of things: performance per watt. Energy isn't free unless you're running a full solar array with battery storage (which of course costs a lot up front), so PC chips that offer high performance only do so while consuming huge amounts of power. Beyond the energy itself, you also pay to deal with the heat those chips release: noise from fans running at high speed, plus air conditioning for the room the PC sits in. Of course Apple could do the same thing and ratchet up the watts their chips use for even more performance, but they clearly aren't willing to go that route, which is a benefit for users, not a negative, IMHO.
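To put rough numbers on the energy argument (the wattages, hours, and electricity price below are illustrative assumptions, not measured figures for any particular machine):

```python
def annual_energy_cost(watts: float, hours_per_day: float,
                       price_per_kwh: float) -> float:
    """Yearly electricity cost for a machine drawing `watts` while in use."""
    kwh_per_year = watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

# Hypothetical comparison: a 600 W PC under load vs. a 200 W
# compact workstation, 8 hours/day at $0.30/kWh:
pc = annual_energy_cost(600, 8, 0.30)   # ~$526/year
mac = annual_energy_cost(200, 8, 0.30)  # ~$175/year
print(round(pc - mac))                  # ~$350/year difference, before cooling
```

The gap widens further once you count the air conditioning needed to remove that extra heat from the room.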
 
Wow, you’re going to be astonished by the jump in performance. If you have time and you feel like it, let us know how that machine feels.
He He - I went from a late 2014 (Intel 2 core) 8GB and DIY 1TB SSD to a M1 Mini 16GB 1TB SSD and it's a helluva jump - next year maybe a new Mini. I don't do production other than the odd video. However, I absolutely needed the jump to keep OS compatibility alive. I would think many others are in the same boat. The integration is the thing - iPhone, iPad, MBAir, M1 Mini at least are on the same "page" and that is very important to me.
 
Thanks for letting me know because I think the Studio is overkill.
Just another option to consider: you can buy 32" 4K monitors on Amazon brand new for $300, and they are generally excellent. Two of those together might be nicer than three 24" monitors (especially if the 24" ones are not 4K). Then you can drive them with the base M2 chip in the Mac mini.
 
But you said you are a casual user, so why do you have an M1 Ultra? Isn't that a bit of a waste?


People often don't, but if a company relies on driving these machines to the max, then a 20% increase in performance may well pay for itself very quickly.

/edited for clarity
 
He He - I went from a late 2014 (Intel 2 core) 8GB and DIY 1TB SSD to a M1 Mini 16GB 1TB SSD and it's a helluva jump - next year maybe a new Mini. I don't do production other than the odd video. However, I absolutely needed the jump to keep OS compatibility alive. I would think many others are in the same boat. The integration is the thing - iPhone, iPad, MBAir, M1 Mini at least are on the same "page" and that is very important to me.
Then you had the same 2014 Mac mini (Intel i5 @ 2.6 GHz) I'm still using. Same 8GB of RAM and a Samsung SATA SSD. The computer still performs well, to be honest, but as you said, I'm stuck on Monterey.

Hopefully I’ll be able to upgrade it next year with the M3 Mac mini.
 
Then you had the same 2014 Mac mini (Intel i5 @ 2.6 GHz) I'm still using. Same 8GB of RAM and a Samsung SATA SSD. The computer still performs well, to be honest, but as you said, I'm stuck on Monterey.

Hopefully I’ll be able to upgrade it next year with the M3 Mac mini.
Pop a new SSD in it and try OpenCore. It should work great, and you'll still have your other SSD for "proper" OS functionality.
 
In single-threaded performance the M2 Ultra beats the Xeon w9-3495X and the Threadripper Pro 5995WX but loses to the Core i9-13900K.


[ The Threadripper Pro 5995WX is getting somewhat old, about two years now. The 7000 series should be coming by the end of the year, and Apple should be nervous about that one. ] That power saving is a win for a single-user workstation running a mix of single- and multi-threaded apps.


In multithreaded work it loses to the server "hand-me-down" chips but stays very close to the Core i9-13900K (wins some, loses some). So it sits somewhere between a 'top fuel dragster' consumer chip and a multi-user server chip. For a single-user workstation that is probably the right call. It isn't 'bad' at all. It isn't a Xeon or Threadripper 'killer', but it really doesn't have to be.
This is an odd thing for Tom's Hardware to post, because it uses the deprecated version of Geekbench, v5. They released a new version with a lot of explanation of why it is more appropriate for comparing today's hardware. Both the M2 Ultra (the only one we've seen tested so far) and the Core i9-13900K fare better in single-core scores on GB6, but the Ultra has better multi-core scores than the i9 on GB6, contrary to the GB5 scores they compared.
 
There are SSD slots, but proprietary SSDs needed.
No, they are not true SSD slots. The SSD controller lives in the SoC, so those slots take raw flash modules rather than complete drives. Totally different. Proprietary slots, yes.
 
This is an odd thing for Tom's Hardware to post because it's using the deprecated version of Geekbench--v5. They released a new version with a lot of explanation behind why there was a new version that was more appropriate to comparisons with today's hardware.

Which newer version????

"...

Geekbench update renders 6.0 and 6.1 results incomparable

..."



Geekbench 5 lets folks compare longitudinally. The systems many people already own probably have a GB5 score, so they can compare what they bought against the M2 Ultra. GB6? A 0.1 update means the scores go completely out the window. That's not going to be a useful longitudinal tool.

GB is a benchmark that is actively out chasing bigger scores.
 
Which newer version????

"...

Geekbench update renders 6.0 and 6.1 results incomparable

..."



Geekbench 5 lets folks compare longitudinally. The systems many people already own probably have a GB5 score, so they can compare what they bought against the M2 Ultra. GB6? A 0.1 update means the scores go completely out the window. That's not going to be a useful longitudinal tool.

GB is a benchmark that is actively out chasing bigger scores.
Ehhh, it's not quite that. I'm not a fan of the benchmarks either, but there was a major change from 5 to 6; the 0.1 update isn't comparable to that. There are plenty of newer GB6 scores for every one of the CPUs Tom's Hardware listed, and those could have *easily* been used in that article. There's no valid reason to have used the old version.
 