I’ve heard reports that other M-series chips are hard to push to the point where they would throttle at all, or even spin up the fans under load. Not sure about the M3, but my M1 Max Mac Studio doesn’t even spin up the fans playing a AAA game like Resident Evil Village at 4K high settings.
Oh definitely, I wouldn't think it would throttle much at all in your Studio; I was actually asking about the reverse: whether an M2 Max/Ultra in a Studio would still be faster over long loads, due to not throttling, than an M3 Max in a MacBook that throttles at times (maybe they never throttle in a laptop either; I just thought I had seen a few videos where they did when pushed in laptop form).
 
It seems clearer now why Qualcomm did a paper launch for its Snapdragon X Elite recently even though products won't be available until mid-2024. With a GB score of ~2,900 ST and ~15K MT (at 80 W), it doesn't look as good against the M3 Max, and even worse against the M4 Max, its real 2024 competitor. Unlike GB, real-world apps running on WoA would have the additional penalty of emulation.

It's unrealistic to expect that the first chip from QC/Nuvia would be able to compete with an Mx Max on all metrics (no matter where a couple of their engineers came from). It would take generations, investment, and expansion of QC's chip division.

However, the PC space has been struggling with unfit-for-purpose ARM processors that can barely handle a Chromebook that leans on the cloud. If the Elite can provide a fit-for-purpose PC ARM chip that'll run Windows on ARM at actually decent power/performance, then they have a good start.

If they compare themselves to Apple, it's just marketing drivel; their target is to make a dent in the PC space, not Apple.
 
This makes me want to buy one even though I have absolutely no use for it except for commenting on MacRumors.
Yet only the chips are being tested. I thought the Mac faithful would've realized we need REAL testing, such as of the embedded SSDs.

The M2 Max/Ultra Mac Studio SSDs have much faster read/write speeds than the M2 Pro devices, so I'm curious whether the M3 Pro/Max machines also got a read/write speed bump.
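For anyone who wants to sanity-check their own machine while we wait for reviews, a rough sketch like the one below gives a ballpark sequential figure. The 1 GiB test size and temp-file location are arbitrary choices, and the read pass can be flattered by the OS file cache, so treat it as a back-of-the-envelope check rather than a proper SSD benchmark:

```python
import os
import time
import tempfile

SIZE = 1 << 30              # 1 GiB test file (arbitrary choice)
CHUNK = 8 << 20             # 8 MiB chunks
buf = os.urandom(CHUNK)
path = os.path.join(tempfile.gettempdir(), "ssd_speed_test.bin")

# Sequential write, flushed to disk before stopping the clock
t0 = time.perf_counter()
with open(path, "wb") as f:
    for _ in range(SIZE // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_s = time.perf_counter() - t0

# Sequential read (may be served partly from the OS file cache,
# so treat this number as an upper bound)
t0 = time.perf_counter()
with open(path, "rb") as f:
    while f.read(CHUNK):
        pass
read_s = time.perf_counter() - t0

os.remove(path)
gib = SIZE / (1 << 30)
print(f"write: {gib / write_s:.2f} GiB/s, read: {gib / read_s:.2f} GiB/s")
```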
 
I expect a lot of overheating here. Apple went over 4 GHz because of 3nm just to artificially increase the score.
 
Technically, we don't know yet that it does. I've seen speculation that the M3 has the A17's GPU, but the A16's CPU. It's hard to tell so far from the data we have.
I don’t see how this isn’t obvious. Only the A17 Pro is on TSMC’s 3nm N3B process. The M3 series is on TSMC’s 3nm N3B process. No other TSMC 3nm process is in mass production besides N3B. Yes, they could have redesigned earlier chips to work on 3nm, but that’s unlikely. Why would they when they already have a perfectly good architecture that works with it? The second proof is that the GPU cores are the same cores used in the A17 Pro, with mesh shading and ray tracing, features that don’t exist on any other A-series chip besides the A17 Pro. The M3 series also has an AV1 decoder in its media engine, something that was only added with the A17 Pro. We know without a doubt that Apple based the M1s on the A14 and the M2s on the A15. The A16 doesn’t have any of the features Apple mentions for the M3 series.
 
If you are doing compiles, multicore performance is the benchmark most directly related to compile times.

On what toolchain? That's definitely not the case with .NET (which scales poorly beyond a few cores) or Swift (which does scale even to ten cores, but then drops off significantly).
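That drop-off is roughly what Amdahl's law predicts once the serial parts of a build (linking, a few huge files, dependency chains) start to dominate. A back-of-the-envelope sketch; the parallel fractions below are assumed purely for illustration, not measured properties of either toolchain:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the fraction of the build that actually parallelises.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.80, 0.90, 0.95):                 # hypothetical parallel fractions
    row = ", ".join(f"{n} cores: {speedup(p, n):.1f}x" for n in (4, 8, 10, 16))
    print(f"p={p:.2f} -> {row}")
# Even with 95% of the work parallel, 16 cores only buys ~9x over one core,
# which is why compile times flatten out well before core counts do.
```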
 
The GPU speed of a 76-core M2 Ultra is roughly that of a GPU somewhere between an RTX 4070 Ti and a 4080, hardly something to sneeze at. I found this with a simple web search, which turned up a Tom’s Hardware article on its speed. The graphics cores are what Apple concentrated on most when moving from M2 to M3, so I would guess the future M3 Ultra will rival top-end Nvidia cards and might beat the highest-end one, since it will probably have 80 GPU cores along with each core’s individual improvements.

I would also point out that the M2 Ultra’s GPU typically uses 45 watts of power, with a peak of 90 watts, compared to the over 500 watts Nvidia cards use these days. IIRC, the RTX 4090 can use up to 600 watts. The entire Mac Studio Ultra only has a 370-watt power supply for the whole computer, while you’d need a 1000-1200-watt power supply to run a PC with a high-end graphics card. Nvidia cards run so hot they’ll raise the temperature of a room dramatically, the chief reason I gave my gaming PC to my son. Imagine what Apple could do with their GPUs if they used even half the power Nvidia’s require. I have yet to hear the fans on my M2 Ultra Mac Studio, and I’ve run some serious stuff on it that would make my MacBook Pro M1 Max fans scream. If I only have the power of a 4070 Ti but have a silent computer that doesn’t turn my room into a sauna, I’ll take it.
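To put a rough number on the efficiency gap (a sketch only: the 90 W peak is the figure quoted above, and the 285 W is the RTX 4070 Ti's board-power spec, brought in here just for a like-for-like performance class):

```python
# Back-of-the-envelope efficiency comparison using the figures above.
m2_ultra_gpu_watts = 90      # quoted peak GPU power for the M2 Ultra
rtx_4070_ti_watts  = 285     # Nvidia's board-power spec, a similar performance class

ratio = rtx_4070_ti_watts / m2_ultra_gpu_watts
print(f"Roughly the same class of performance at about 1/{ratio:.1f} of the power draw")
```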

We have the first OpenCL compute scores for the 40-GPU-core M3 Max: 92,000

The M2 Max with 38 cores has an OpenCL score of 86,000
The M2 Ultra with 76 cores has an OpenCL score of 121,000

That means that in OpenCL compute the M3 Max is less than 10% faster than the M2 Max.
Something does not add up, since the M3 Max has 5% more cores and a 10% higher clock speed.

Btw:
An NVIDIA desktop GeForce RTX 4090 scores 304,000
An RTX 4080 Laptop GPU scores 195,000

Even the M3 Ultra will be far behind.
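To make the "does not add up" concrete, core count and clock alone would predict a bigger jump than the posted score shows. A quick check, using the scores above and the ~10% clock figure as quoted (not an official spec):

```python
# Expected M2 Max -> M3 Max GPU uplift from core count and clock alone,
# versus the OpenCL scores posted above.
m2_max_score, m3_max_score = 86_000, 92_000
core_ratio  = 40 / 38            # 40 vs 38 GPU cores (~5% more)
clock_ratio = 1.10               # ~10% higher clock, as quoted

expected = core_ratio * clock_ratio - 1
observed = m3_max_score / m2_max_score - 1
print(f"expected ~{expected:.0%} uplift, observed ~{observed:.0%}")
# -> expected ~16%, observed ~7%: either scaling isn't linear here, or the
#    deprecated OpenCL path isn't a good proxy for the new GPU.
```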
 
Only the A17 Pro is on TSMC’s 3nm N3B process. The M3 series is on TSMC’s 3nm N3B process.

Microarchitectures are not tied to process nodes. The Apple A9 was made on both 16nm and 14nm, and the Apple A9X was on a third, different 16nm process.

Yes, they could have redesigned earlier chips to work on 3nm, but that’s unlikely. Why would they when they already have a perfectly good architecture that works with it?

Well, for one, because TSMC seems to have trouble ramping up 3nm production.

I don't personally think that's what happened, but I've seen an article that seemed pretty sure of it.

The second proof is that the GPU cores are the same cores used in the A17 Pro, with mesh shading and ray tracing, features that don’t exist on any other A-series chip besides the A17 Pro.

Yes, like I said: the speculation was that the CPU cores were the A16's, and only the GPU cores were the A17's.

 
We have the first OpenCL compute scores for the 40-GPU-core M3 Max: 92,000

The M2 Max with 38 cores has an OpenCL score of 86,000
The M2 Ultra with 76 cores has an OpenCL score of 121,000

That means that in OpenCL compute the M3 Max is less than 10% faster than the M2 Max.
Something does not add up, since the M3 Max has 5% more cores and a 10% higher clock speed.

Btw:
An NVIDIA desktop GeForce RTX 4090 scores 304,000
An RTX 4080 Laptop GPU scores 195,000

Even the M3 Ultra will be far behind.
Check out the article I was referring to. They ran multiple tests, showing far faster performance. OpenCL isn’t even supported on macOS anymore and hasn’t been for years. Tom’s Hardware’s analysis showed performance between an RTX 4070 Ti and a 4080 for the M2 Ultra.
 
Microarchitectures are not tied to process nodes. The Apple A9 was made on both 16nm and 14nm, and the Apple A9X was on a third, different 16nm process.
Actually, they are tied to process nodes. If it happens to work between 16nm and 14nm, that’s fine, but Apple would test and fix any issues before sending it to production. Apple would have to revise the cores that were designed for 5nm to make sure they work on 3nm. You can’t just take a design that was created for one process node and assume it’ll work with another process. Apple is also clearly using the A17 Pro’s GPU cores. What possible reason would they have for using the A16 CPU cores when they can simply use the same CPU cores from the A17 Pro? The simplest explanation is usually the correct one. It’s a lot less work to simply use what was already designed for a 3nm process than to have to revise a design that wasn’t.
 
You can’t just take a design that was created for one process node and assume it’ll work with another process.

No, but you can adapt a design to another process. And Apple has done it before.

Apple is also clearly using the A17 Pro’s GPU cores.

Yes. For the third time, nobody is arguing against that.

And I also personally think the CPU cores are the A17's.

What possible reason would they have for using the A16 CPU cores when they can simply use the same CPU cores from the A17 Pro?

Yield issues.

 
I’ve heard reports that other M-series chips are hard to push to the point where they would throttle at all, or even spin up the fans under load. Not sure about the M3, but my M1 Max Mac Studio doesn’t even spin up the fans playing a AAA game like Resident Evil Village at 4K high settings.
My M1 Pro 16” never has audible fan noise even when rendering. I also haven’t seen throttling where the render progress slows.
 
After factoring in the exchange rate, Australian Government taxes, and ACCC regulation requiring Apple to provide seven days' care plus a two-year warranty, the MBA 💻 M3 Pro has a starting price of:

[attached screenshot of the Australian starting price]

Sucks to live in a highly taxed and regulated country. Enjoy your liberties, my American cousins.

You must be joking.

The US price for this model is USD 3,499, which at today's Google exchange rate is A$5,442. When you factor in our (Aus) 10% sales tax (which is always included in the displayed price), you get A$5,986, i.e. within $15 of Apple's price in Australia, noting that a small change in the currency rate would flip that difference positive or negative probably every other day. Further, considering that the US sale price doesn't include any sales tax and that many US states do in fact have a sales tax, it is clear that there is no increase in pricing for our region at all. But thanks to our regulations we do get longer support and warranties.
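Laid out as arithmetic (a quick sketch; the implied exchange rate comes from the A$5,442 figure above and moves daily, so the output is illustrative only):

```python
# Convert the US list price at the implied exchange rate, then add 10% GST.
usd_price = 3_499
usd_to_aud = 5_442 / 3_499        # implied rate from the figures above (~1.56)
gst = 0.10

aud_ex_gst  = usd_price * usd_to_aud
aud_inc_gst = aud_ex_gst * (1 + gst)
print(f"A${aud_ex_gst:,.0f} ex GST -> A${aud_inc_gst:,.0f} inc GST")
# -> A$5,442 ex GST -> A$5,986 inc GST, within rounding of Apple's Australian price.
```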

Also misleading to Americans is this talk about freedoms and high taxes, as our conservative party is more in favour of raising our sales tax than any other party.
 
"The new 16-inch MacBook Pro starts at $3,499 in the U.S. when configured with the M3 Max chip, while the Mac Studio with the M2 Ultra chip starts at $3,999, so you can effectively get the same performance for $500 less ..."

Which points to a pretty quick upgrade timetable for the Studio, which was already looking overpriced compared to the higher-end Mac minis?
“ … so you can effectively get the same performance for $500 less ..." with a built-in display for free 😊
 
I seriously do not understand what would warrant a sad trombone in this instance. Please elaborate on why the new M3 Max chip being as fast as two M2 Max chips stitched together (M2 Ultra) would make a sad trombone play.

With respect to the "Pro" laptops: starting at 8GB in 2023 is terrible

With respect to the iMacs, leapfrogging over M2 is weak

My base M2 mini has TB4, while the base M3 has TB3; that's a hardware regression. Although TB3 and TB4 both support 40Gb/s ... if past Apple history proves anything, they'll use version 3 versus 4 as an artificial cut-off point for macOS upgrades in the future...

And then finally, the 150/300/400/800 GB/s bandwidth of their unified memory is kind of ridiculous. Before, it was roughly 85, 100, 200, or 400 GB/s in generations one and two. They have the die shrink available to them, but they opted not to push the boundaries, and to me that seems lazy, as they own the whole stack now.

And finally, they rarely compared anything of real value to M2. It was all M1 or Intel stuff, so clearly they just are trying to lure you into an upgrade, all thepeople on old Intel hardware, and that's a money decision, not a technology, decision, and I thought we were told that, "we ain't seen nothing yet." They are just itching to drop support for Intel machines and it's gonna come sooner than later I think.

To me, it's clear that they artificially slowed things down since they were so far ahead of the competition; imo, a sad trombone is warranted.

And, really minor: they took away support for high-impedance headphones... Really, just give us a decent DAC/amp like the one already in my base M2 mini.

I will, however, cut them slack for the lack of Wi-Fi 7...

Finally, from reading multiple websites, nobody's running out to trade in their M2 machines hand over fist, so it seems like it's initially going to be a slow roll until M4.

I absolutely could be wrong though...✌️
 
New score records!

[attached screenshot of the new scores, 2023-11-03]
 