Long-time Apple customers would know this chart is BS at first glance. When the advantage is much clearer they give you the better charts and graphs (and the multipliers, of course). This one is designed to convey an idea without outright lying.
I don't get any sense of an attempt to 'lie' here. Some people are throwing that word around pretty carelessly. Maybe you should take a moment to really examine what that chart does and doesn't say.
 
I don't get any sense of an attempt to 'lie' here. Some people are throwing that word around pretty carelessly. Maybe you should take a moment to really examine what that chart does and doesn't say.

I said without lying. Apple is trying to say here that the M1 Ultra has a better GPU than the RTX 3090 because it has better (or the same) relative performance using much less power. It's clear as day. It's not a lie - it's just disingenuous (choosing "relative performance" over just straight "performance").
 
Long-time Apple customers would know this chart is BS at first glance. When the advantage is much clearer they give you the better charts and graphs (and the multipliers, of course). This one is designed to convey an idea without outright lying.
Well, I agree that Apple were not lying, which is to say they were presenting the data honestly, if perhaps incompletely, by not showing the full performance range of the GPU.
 
I can appreciate this position for sure, but our teams are familiar with the three main OSes for the sake of development. A lot of our app work is moving to PWAs as interest is accelerating in this area and will likely continue to do so for a number of years.

I think you'd be surprised how many software development teams look at the M1 series and realise that, though there are more powerful options out there and our workflows could likely benefit from the speed bump, the overall package, especially with the new-gen MBPs, quite simply has no equal on the market at the moment, especially in this era of mobile working.

The fact that I can pop into the office and plug in, or stay home, get the same performance wherever I am while running on battery, and still do better than the desktops we replaced is frankly astounding. We were running 3070s with 10th-gen Intel CPUs in our machines and we're seeing on-par or better performance across the board.

Granted, the Mac Studio is supposed to be a desktop machine and I realise the argument kind of breaks down with power/performance a little bit, especially in the GPU department, but for offices running 10+ machines at 400-700W apiece without a use case for GPU-intensive workloads...these machines would look mighty appealing!
I agree in terms of the M1 for dev work. I actually think the flexibility of the Mac platform (I've been able to run macOS, Unix, Windows, etc. on one machine for years now) has been a huge selling point for devs.

Strangely enough, as power has been increasing I'm finding (at least in the financial-industry dev space where I work) that I'm not getting the advantages, as we are increasingly running VMs, whether it be AWS machines, Citrix, etc. They obviously have their benefits, but they are nowhere near as fast as running a real machine of the same spec. But we don't do GPU-intensive stuff, so it matters less I suppose.

I can't remember the last time I actually developed directly on a real machine for work, lol!
It's like two steps forward, one step back in the dev world sometimes, sadly.
 
I said without lying. Apple is trying to say here that the M1 Ultra has a better GPU than the RTX 3090 because it has better (or the same) relative performance using much less power. It's clear as day. It's not a lie - it's just disingenuous (choosing "relative performance" over just straight "performance").
And because they chose the proper wording it wasn’t disingenuous.
 
It's all just marketing: if you can make something sound real while keeping it super vague, it triggers people to want it. It's like they've been reading that favorite book of Bill Gates's on how to make any numbers work in your favor.
 
It’s only unclear if you think of Apple as some benevolent uncle.

If you think of them as a corporation looking to portray their product in the best light, it’s very clear.
When companies lie about benchmarks, consumers stop believing them. Case in point: when Apple initially began discussing the M1's performance, I didn't believe it at all. When third-party reviews more or less confirmed Apple's claims, I went "huh, I guess I should take Apple's claims more seriously next time."

Now I'm back to basically ignoring Apple's benchmarks. And I think it's a particular shame, as there are so many other impressive metrics they could have touted. Oh well.
 
Sure, but the whole point of Geekbench is to have workloads that are representative.

Here's what their "Compute" workloads do.

What is deceptive is actually claiming the M1 Ultra isn't faster than an RTX 3090 simply by taking the GB compute score into account. By that logic, someone else could show only the GFXBench Metal score and say that the RTX 3090 is only as fast as a Mac Studio with M1 Ultra. The Verge spent the entire first half of their video talking about how fast the Mac Studio was in photo/video work, especially After Effects, but in the second half they suddenly decided to quote the GB score (which has always been low on Mac devices) and Shadow of the Tomb Raider (a non-native game) when comparing it to the RTX 3090. They could have easily compared DaVinci Resolve and Baldur's Gate 3 performance, and the power draw during those workloads, and then they would have been right to call the graph in the keynote deceptive.

But, two things can be true at the same time: the M1 Ultra delivers impressive performance at low power, and Apple seems to have exaggerated its GPU performance.

Looking at the graph they shared, they clearly state that the M1 Ultra gives more performance at the same power draw, or consumes less power at the same performance level. While the graph does not state which benchmark that is, many of the tests we perform show exactly that; if anything they made conservative claims rather than exaggerations.

You're being disingenuous with your numbers there. Apple's spec sheets say the Studio uses 370 watts max. A stock, non-overclocked 3090 uses about 350 watts, and even if you pair it with a power-hungry 12900K, which draws 272W, or a 5950X, which only draws 140W, you're not going to see an eightfold increase in power usage vs the Studio. In other words, a 3090 is twice as fast while using twice the power.

The spec sheet says "Maximum Continuous Power", which is the sustained power the PSU can provide, allowing for the many devices that could be attached to the Mac Studio's ports and drawing power from it.
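To put rough numbers on the wattage comparison above, here is a minimal back-of-the-envelope sketch in Python. Every figure is either taken from the posts or is a plain assumption (including the hypothetical 2x speed advantage for the 3090); nothing here is a measurement:

```python
# Rough whole-system power and performance-per-watt comparison.
# All numbers are illustrative assumptions, not measurements.

studio_max_w = 370   # Apple's "Maximum Continuous Power" spec for the whole Mac Studio,
                     # which includes headroom for bus-powered peripherals
rtx3090_w = 350      # stock, non-overclocked 3090 under load
cpu_w = 272          # e.g. a 12900K under load (a 5950X would be closer to 140 W)

pc_total_w = rtx3090_w + cpu_w            # GPU + CPU only; ignores RAM, drives, fans, PSU losses
power_ratio = pc_total_w / studio_max_w   # nowhere near an eightfold difference

perf_ratio = 2.0                          # hypothetical: assume the 3090 box is twice as fast
perf_per_watt_ratio = perf_ratio / power_ratio

print(f"PC draws ~{power_ratio:.1f}x the Studio's maximum continuous power")
print(f"If it is {perf_ratio:.0f}x as fast, that is ~{perf_per_watt_ratio:.1f}x the perf/W")
```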

People are reading the plot as though the vertical axis is the important one. Usually that is the case, but as the horizontal dashed line labelled '200W less power' clearly indicates, Apple is focusing on the power consumption on the horizontal axis, not overall computational performance. Those calling for Apple's marketing team to be fired over this didn't really pay adequate attention to the slide. Nothing in the slide states that the GPU couldn't draw more power.

Exactly, this! Finally, someone who can read a graph... They clearly state that the M1 Ultra gives more performance at the same power draw, or consumes less power at the same performance level. That graph does not say the M1 Ultra reaches a higher absolute performance level than the RTX 3090.
 
When companies lie about benchmarks, consumers stop believing them. Case in point: when Apple initially began discussing the M1's performance, I didn't believe it at all. When third-party reviews more or less confirmed Apple's claims, I went "huh, I guess I should take Apple's claims more seriously next time."

Now I'm back to basically ignoring Apple's benchmarks. And I think it's a particular shame, as there are so many other impressive metrics they could have touted. Oh well.

They clearly state that the M1 Ultra gives more performance at the same power draw, or consumes less power at the same performance level. Not only is it not a lie, it is also quite expected given how the M1, M1 Pro and M1 Max compare to Intel/AMD/Nvidia. That graph does not say the M1 Ultra reaches a higher absolute maximum performance level than the RTX 3090 does.
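For anyone still arguing about what the chart does and doesn't claim, here is a minimal sketch of the two readings described above. The (power, relative performance) points are invented purely for illustration; they are not Apple's data:

```python
import numpy as np

# Hypothetical (power in W, relative performance) curve points, invented only to
# illustrate how a performance-vs-power chart is read; not Apple's actual data.
m1_ultra = np.array([(30, 40), (60, 70), (100, 100)], dtype=float)
rtx_3090 = np.array([(100, 45), (200, 80), (320, 130)], dtype=float)

def perf_at_power(curve, watts):
    """Interpolate relative performance at a given power draw."""
    return np.interp(watts, curve[:, 0], curve[:, 1])

def power_at_perf(curve, perf):
    """Interpolate the power draw needed to hit a given relative performance."""
    return np.interp(perf, curve[:, 1], curve[:, 0])

# Reading 1: at the same power draw, which part delivers more performance?
print(perf_at_power(m1_ultra, 100), "vs", perf_at_power(rtx_3090, 100))

# Reading 2: at the same performance level, which part draws less power?
print(power_at_perf(m1_ultra, 100), "W vs", power_at_perf(rtx_3090, 100), "W")

# Neither reading says anything about the 3090's peak performance further up its curve.
```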
 
This is particularly relevant for desktop-class computing. Most people are not concerned with power consumption for non-portable devices. As expected, it is going to take YEARS for Apple to truly compete. And even then, I won't be surprised if Apple eventually caves and offers dedicated graphics options.
You think Apple is doing SoCs as some sort of vanity thing? Yeah, try again.
 
RTX3090 energy cost per year including host machine: £981/year

M1 Ultra energy cost: £148/year

Even if it's half the speed, that's a win.
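Whether figures like those are plausible depends entirely on the assumed duty cycle and tariff. A quick sketch of the arithmetic, with every input (average draw, hours per day, price per kWh) a hypothetical assumption:

```python
# Annual electricity cost from average draw, usage hours and tariff.
# All inputs are illustrative assumptions, not measured values.

def annual_cost(avg_watts: float, hours_per_day: float, price_per_kwh: float) -> float:
    kwh_per_year = avg_watts / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

PRICE = 0.30  # £/kWh, a rough 2022 UK-style tariff (assumption)

# A 3090 + host running hard vs a Mac Studio, under the same hypothetical duty cycle.
print(f"PC:     £{annual_cost(620, hours_per_day=12, price_per_kwh=PRICE):.0f}/year")
print(f"Studio: £{annual_cost(150, hours_per_day=12, price_per_kwh=PRICE):.0f}/year")
```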
I question your math.

I pay, at the current exchange rate, £946 per year in electricity for a 2200 square foot house in Texas, including the 3090.
 
Are you writing your game on the Mac, for PC?
It was said we can't compare Windows and Mac performance. Why not? My video editing is insanely better on these new Macs than on my Windows PC. Why can't we compare them?
 
Raw performance isn't everything; real-world performance is what matters, along with how well apps are optimized. d2d does a great job at that, and so does The Verge. Premiere, Resolve, AE, FCP. Also, many apps being optimized for Metal, like Blender, are still in their infancy.
Then marketing should stick with that message: that the Ultra, on balance, is a far more effective platform for the use cases it is designed for. Dissemble the truth and all you create is a problem with believability and trustworthiness. Apple should get out ahead of this and admit this unforced error.
 
It was said we can't compare Windows and Mac performance. Why not? My video editing is insanely better on these new Macs than on my Windows PC. Why can't we compare them?
Different tool for a different job? I guess I am baffled that you are using a Mac to make a Windows game.
 
A $1300 laptop with a 70W Nvidia 3060 is twice as fast as the maxed-out M1 Ultra.

Blender BMW
16.39s - Nvidia 3060 70W mobile (GPU OptiX Blender 3.0)
20.57s - AMD 6900xt (GPU HIP Blender 3.0)
29s - 2070 Super (GPU OptiX)
30s - AMD 6800 (GPU HIP Blender 3.1)
34s - M1 Ultra 20CPU 64GPU (GPU Metal Blender 3.1)
37s - M1 Ultra 20CPU 48GPU (GPU Metal Blender 3.1)
42.79s - M1 Max 32GPU (GPU Metal Blender 3.1 alpha)
48s - M1 Max 24GPU (GPU Metal Blender 3.1 alpha + patch)
51s - Nvidia 2070 Super (GPU CUDA)
1m18.34s - M1 Pro 16GPU (GPU Metal Blender 3.1 alpha + patch)
1m35.21s - AMD 5950X (CPU Blender 3.0)
1m43s - M1 Ultra 20CPU 64GPU (CPU Blender 3.1)
1m50s - M1 Ultra 20CPU 48GPU (CPU Blender 3.1)
2m0.04s - Mac Mini M1 (GPU Metal Blender 3.1 alpha + patch)
2m48.03s - MBA M1 7GPU (GPU Metal Blender 3.1 alpha)
3m55.81s - AMD 5800H base clock no-boost and no-PBO overclock (CPU Blender 3.0)
4m11s - M1 Pro (CPU Blender 3.1 alpha)
5m51.06s - MBA M1 (CPU Blender 3.0)

Power consumption isn't that great either compared to a laptop.
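One quick way to sanity-check the "twice as fast" claim from a list like that is to normalise the times against a single entry. A small sketch, using a handful of figures copied from the list above:

```python
import re

def to_seconds(t: str) -> float:
    """Parse times like '1m43s', '42.79s' or '34s' into seconds."""
    m = re.fullmatch(r"(?:(\d+)m)?([\d.]+)s?", t)
    return int(m.group(1) or 0) * 60 + float(m.group(2))

# Figures copied from the Blender BMW list above.
results = {
    "Nvidia 3060 70W mobile (OptiX)": "16.39s",
    "AMD 6900 XT (HIP)": "20.57s",
    "M1 Ultra 64-core GPU (Metal)": "34s",
    "M1 Max 32-core GPU (Metal)": "42.79s",
    "M1 Ultra 20-core CPU": "1m43s",
}

baseline = to_seconds(results["M1 Ultra 64-core GPU (Metal)"])
for name, t in results.items():
    secs = to_seconds(t)
    print(f"{name:32s} {secs:7.2f}s  {baseline / secs:.2f}x vs M1 Ultra GPU")
```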
 
Then marketing should stick with that message: that the Ultra, on balance, is a far more effective platform for the use cases it is designed for.

You want Apple to put out a message that a product of theirs does the things it was meant to do?

 
It was definitely sketchy the way Apple worded that; it's just not a good look. Only people who are somewhat technically inclined realized what Apple was actually saying with that; to everyone else it's simply going to look like false advertising. The good news is that even at only half as fast as last-gen GPUs, and ultimately less than a third of the performance of a current top-tier GPU, that's still a serious accomplishment for Apple. That's also still plenty of performance to provide excellent gaming experiences (it is NOT, however, good enough to provide anything resembling a good VR experience).
 
Who the **** cares about comparing 100 watts to 320 and why should that actually make people happy? Does that mean my Renault Clio is as fast as a Lamborghini Aventador at 60mph or what? :D
I wasn't clear: it's not comparing wattage vs wattage, or performance vs performance; the chart is comparing the M1's wattage-vs-performance curve against the 3090's. It shows the M1 producing more performance at 100W than the 3090 at 320W. It would be like showing your Clio hitting 219mph using only 39.5mpg, and the Aventador only maxing out at 217mph using 9mpg.
 
A $1300 laptop with a 70W Nvidia 3060 is twice as fast as the maxed-out M1 Ultra.

Blender BMW
16.39s - Nvidia 3060 70W mobile (GPU OptiX Blender 3.0)
20.57s - AMD 6900xt (GPU HIP Blender 3.0)
29s - 2070 Super (GPU OptiX)
30s - AMD 6800 (GPU HIP Blender 3.1)
34s - M1 Ultra 20CPU 64GPU (GPU Metal Blender 3.1)
37s - M1 Ultra 20CPU 48GPU (GPU Metal Blender 3.1)
42.79s - M1 Max 32GPU (GPU Metal Blender 3.1 alpha)
48s - M1 Max 24GPU (GPU Metal Blender 3.1 alpha + patch)
51s - Nvidia 2070 Super (GPU CUDA)
1m18.34s - M1 Pro 16GPU (GPU Metal Blender 3.1 alpha + patch)
1m35.21s - AMD 5950X (CPU Blender 3.0)
1m43s - M1 Ultra 20CPU 64GPU (CPU Blender 3.1)
1m50s - M1 Ultra 20CPU 48GPU (CPU Blender 3.1)
2m0.04s - Mac Mini M1 (GPU Metal Blender 3.1 alpha + patch)
2m48.03s - MBA M1 7GPU (GPU Metal Blender 3.1 alpha)
3m55.81s - AMD 5800H base clock no-boost and no-PBO overclock (CPU Blender 3.0)
4m11s - M1 Pro (CPU Blender 3.1 alpha)
5m51.06s - MBA M1 (CPU Blender 3.0)

Power consumption isn't that great either compared to a laptop.


Blender doesn't support direct Metal GPU rendering; it still goes from OpenCL to Metal.
 
Blender has officially had Metal rendering for Cycles for only a couple of weeks.

I'd expect it to get more performant as it's tuned.
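For anyone wanting to confirm which backend their render is actually using, the Cycles compute device can be selected from Blender's Python console. A sketch, assuming Blender 3.1+ on Apple Silicon (the exact preference path is from memory, so treat it as an assumption rather than gospel):

```python
# Run inside Blender's Python console (Blender 3.1+ on Apple Silicon).
# Switches Cycles from CPU rendering to the Metal GPU backend.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "METAL"   # other backends: "CUDA", "OPTIX", "HIP", "NONE"
prefs.get_devices()                   # refresh the detected device list

for device in prefs.devices:
    device.use = (device.type == "METAL")   # enable only the Metal GPU(s)
    print(device.name, device.type, "enabled" if device.use else "disabled")

bpy.context.scene.cycles.device = "GPU"     # render the current scene on the GPU
```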
 
Nvidia and Intel are already at the crisis-room-meetings stage of the game. Apple is competing with and beating Intel's highest-end desktop CPU: a 12th-gen effort vs version 1 from Apple. Nvidia is sweating as well. Nvidia is looking at the Ultra and wondering if they might have to up the power of their next-gen 4090 to 800W to be competitive with Apple silicon.

The people who aren't looking closely have mostly dismissed Apple's performance-per-watt goals. And that was, and has been, the whole ball game. The reason for this obsessive focus was always tied to scaling.

All BS you say…?

Try running a 12900k+3090 on 200w…
Are they really sweating?

Is there enough cross platform GPU intensive software that might make Nvidia and Intel and AMD users switch platforms?

I truly don't know.

I'd think not, but I don't use this stuff to pay my mortgage, so ¯\_(ツ)_/¯
 
Blender devs definitely need to do some tweaking on the Metal rendering pipeline. Attached is my BMW score with a 16-core Mac Pro w/ W6800X Duo. This card fares much better against Nvidia in Octane.

13.74 seconds, in case you can't read that small in the screenshot. (A 3090 does it in around 10-11 seconds; 13.56 on another pass.)

CPU only is around 1:50.55

Using both actually slows it down to 14.98 seconds...lol.

(Using one GPU of the Duo, it does it in around 23.68 seconds.)

So based on that, the M1 Ultra is actually pretty damn impressive for an integrated part, but not exactly as fast as Apple claims.
 

Attachment: Screen Shot 2022-03-18 at 3.06.36 PM.png