Apple used a performance per watt chart. It's clearly labeled.

Imagine paying someone to paint your house and instead he painted your car… would you be OK with that…?
I, too, enjoy using entirely irrelevant analogies.
Why wait for more information when people can speculate and complain based on incomplete information now?

Isn't it still too early to judge?
I don't get any sense of an attempt to 'lie' here. Some people are pretty carelessly throwing that word around. Maybe you should take a moment to really examine what that chart does and doesn't say.

Long-time Apple customers would know this chart is BS at first glance. When the advantage is much clearer they give you the better charts and graphs (and the multipliers, of course). This one is designed to convey an idea without outright lying.
Well, I agree that Apple were not lying. Which is to say they were presenting data honestly, if perhaps incompletely by not showing the full performance range of the GPU.
I agree in terms of the M1 for dev work. I actually think the flexibility of the Mac platform (I've been able to run macOS, Unix, Windows, etc. on one machine for years now) has been a huge selling point for devs.

I can appreciate this position for sure, but our teams are familiar with the three main OSes for the sake of development. A lot of our app work is moving to PWAs as interest is accelerating in this area and will likely continue to do so for a number of years.
I think you'd be surprised how many software development teams look at the M1 series and realise that, though there are more powerful options out there and our workflows could likely benefit from the speed bump, the overall package, especially with the new-gen MBPs... quite simply, there's nothing like it on the market at the moment, especially in this era of mobile working.
The fact that I can pop into the office and plug in, or stay home, and get the same performance wherever I am while running on battery, and still do better than the desktops these machines replaced, is frankly astounding. We were running 3070s with 10th-gen Intel CPUs and we're getting on-par or better performance across the board.
Granted, the Mac Studio is supposed to be a desktop machine, and I realise the argument breaks down a little on power/performance, especially in the GPU department, but for offices running 10+ machines at 400-700W apiece without a use case for GPU-intensive workloads... these machines would look mighty appealing!
And because they chose the proper wording, it wasn't disingenuous.

I said without lying. Apple is trying to say here that the M1 Ultra has a better GPU than the RTX 3090 because it has better (or the same) relative performance using much less power. It's clear as day. It's not a lie - it's just disingenuous (choosing "relative performance" over just straight "performance").
It's only unclear if you think of Apple as some benevolent uncle.
If you think of them as a corporation looking to portray their product in the best light, it’s very clear.
Sure, but the whole point of Geekbench is to have workloads that are representative.
Here's what their "Compute" workloads do.
But two things can be true at the same time: the M1 Ultra delivers impressive performance at low power, and Apple seems to have exaggerated its GPU performance.
You're being disingenuous with your numbers there. Apple's spec sheet says the Studio uses 370 watts max. A stock, non-overclocked 3090 uses about 350 watts, and even if you pair it with a power-hungry 12900K, which draws 272W, or a 5950X, which draws only 140W, you're not going to see an eightfold increase in power usage vs the Studio. In other words, a 3090 is twice as fast while using twice the power.
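To put rough numbers on that, here is a quick back-of-the-envelope check using only the wattages cited in the post above (peak figures; real draw varies by workload):

```python
# Peak power figures as cited above (watts); real-world draw varies by workload.
MAC_STUDIO_MAX = 370   # Apple's spec-sheet maximum for the Studio
RTX_3090       = 350   # stock, non-overclocked card
I9_12900K      = 272   # power-hungry Intel option
R9_5950X       = 140   # more frugal AMD option

pc_intel = RTX_3090 + I9_12900K   # 622 W total system budget
pc_amd   = RTX_3090 + R9_5950X    # 490 W total system budget

print(f"3090 + 12900K vs Studio max: {pc_intel / MAC_STUDIO_MAX:.1f}x")  # ~1.7x
print(f"3090 + 5950X  vs Studio max: {pc_amd / MAC_STUDIO_MAX:.1f}x")    # ~1.3x
```

Nowhere near an eightfold gap, which is the point being made.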
People are reading the plot as though the vertical axis is the important one. Usually that is the case, but as the horizontal dashed line labelled '200W less power' clearly indicates, Apple is focusing on the power consumption on the horizontal axis, not overall computational performance. Those calling for Apple's marketing team to be fired over this didn't really pay adequate attention to the slide. Nothing in the slide states that the GPU couldn't draw more power.
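To make the two readings concrete, here is a small sketch with hypothetical numbers chosen only to mimic the shape of the slide; they are not Apple's measured data:

```python
# Hypothetical (watts, relative performance) points, for illustration only.
m1_ultra_peak    = (110, 100)   # M1 Ultra's curve flattens out at low power
rtx3090_in_chart = (320, 100)   # where the plotted window cuts the 3090 curve
rtx3090_full     = (450, 180)   # the card with its power limit uncapped (hypothetical)

# Horizontal reading (the slide's framing): power needed for the same performance.
print(f"{rtx3090_in_chart[0] - m1_ultra_peak[0]} W less at matched performance")

# Vertical reading (what many readers assumed): performance when power is unconstrained.
print(f"{rtx3090_full[1] / m1_ultra_peak[1]:.1f}x faster at full power")
```

Both statements can come out of the same pair of curves; the slide simply chose the horizontal one.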
When companies lie about benchmarks, consumers stop believing them. Case in point: when Apple initially began discussing the M1's performance, I didn't believe it at all. When third-party reviews more or less confirmed Apple's claims, I went "huh, I guess I should take Apple's claims more seriously next time."
Now I'm back to basically ignoring Apple's benchmarks. And I think it's a particular shame, as there are so many other impressive metrics they could have touted. Oh well.
You think Apple is doing SoCs as some sort of vanity thing? Yeah, try again.

This is particularly relevant for desktop-class computing. Most people are not concerned with power consumption for non-portable devices. As expected, it is going to take YEARS for Apple to truly compete. And even then, I won't be surprised if eventually Apple caves and has dedicated graphics options.
I question your math.

RTX 3090 energy cost per year, including host machine: £981/year
M1 Ultra energy cost: £148/year
Even if it's half the speed, that's a win.
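For what it's worth, figures in that ballpark only fall out if the machines are assumed to run flat-out around the clock. A minimal sketch of the arithmetic, under my own assumptions of roughly £0.28/kWh and a constant 400W draw for the 3090 system versus 60W for the M1 Ultra:

```python
# Assumptions are mine, not from the post: ~£0.28/kWh, machines running 24/7
# at a constant draw of 400 W (3090 + host) and 60 W (M1 Ultra under load).
PRICE_PER_KWH = 0.28          # GBP
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float) -> float:
    """Yearly electricity cost in GBP for a constant draw of `watts`."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"£{yearly_cost(400):.0f}")   # £981 -- matches the 3090 figure above
print(f"£{yearly_cost(60):.0f}")    # £147 -- close to the £148 quoted
```

Halve the duty cycle and both numbers halve with it, so the absolute gap shrinks quickly for machines that are not working around the clock.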
It was said we can't compare Windows and Mac performance. Why not? My video editing is insanely better on these new Macs than my Windows PC. Why can't we compare them?

Are you writing your game on the Mac, for PC?
Then marketing should stick with that message: that the Ultra, on balance, is a far more effective platform for the use cases it is designed for. Dissemble the truth and all you create is a problem with believability and trustworthiness. Apple should get out ahead of this and admit this unforced error.

Raw performance doesn't mean anything; real-world performance does, and that depends on how well apps are optimized. d2d does a great job at it. So does The Verge. Premiere, Resolve, AE, FCP. Also, many apps that are being optimized for Metal, like Blender, are in their infancy.
Different tool for different job? I guess I am baffled that you are using a Mac to make a Windows game.
I wasn't clear: it's not comparing wattage vs wattage, or performance vs performance; the chart is comparing the M1's wattage vs performance and the 3090's wattage vs performance. It shows the M1 producing more performance at 100W than the 3090 at 320W. It would be like showing your Clio hitting 219mph using only 39.5mpg, and the Aventador only maxing out at 217mph using 9mpg.

Who the **** cares about comparing 100 watts to 320, and why should that actually make people happy? Does that mean my Renault Clio is as fast as a Lamborghini Aventador at 60mph, or what?
A $1,300 laptop with a 70W Nvidia 3060 is twice as fast as the maxed-out M1 Ultra.
Blender BMW
16.39s - Nvidia 3060 70W mobile (GPU OptiX Blender 3.0)
20.57s - AMD 6900xt (GPU HIP Blender 3.0)
29s - 2070 Super (GPU OptiX)
30s - AMD 6800 (GPU HIP Blender 3.1)
34s - M1 Ultra 20CPU 64GPU (GPU Metal Blender 3.1)
37s - M1 Ultra 20CPU 48GPU (GPU Metal Blender 3.1)
42.79s - M1 Max 32GPU (GPU Metal Blender 3.1 alpha)
48s - M1 Max 24GPU (GPU Metal Blender 3.1 alpha + patch)
51s - Nvidia 2070 Super (GPU CUDA)
1m18.34s - M1 Pro 16GPU (GPU Metal Blender 3.1 alpha + patch)
1m35.21s - AMD 5950X (CPU Blender 3.0)
1m43s - M1 Ultra 20CPU 64GPU (CPU Blender 3.1)
1m50s - M1 Ultra 20CPU 48GPU (CPU Blender 3.1)
2m0.04s - Mac Mini M1 (GPU Metal Blender 3.1 alpha + patch)
2m48.03s - MBA M1 7GPU (GPU Metal Blender 3.1 alpha)
3m55.81s - AMD 5800H base clock no-boost and no-PBO overclock (CPU Blender 3.0)
4m11s - M1 Pro (CPU Blender 3.1 alpha)
5m51.06s - MBA M1 (CPU Blender 3.0)
Power consumption isn't that great either compared to the laptop.
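Taking the times straight from the list above, the "twice as fast" claim does check out for this particular scene:

```python
# Render times (seconds) copied from the Blender BMW results listed above.
rtx3060_mobile_70w = 16.39   # GPU OptiX, Blender 3.0
m1_ultra_64gpu     = 34.0    # GPU Metal, Blender 3.1
m1_max_32gpu       = 42.79   # GPU Metal, Blender 3.1 alpha

print(f"M1 Ultra 64-GPU: {m1_ultra_64gpu / rtx3060_mobile_70w:.2f}x slower")  # ~2.07x
print(f"M1 Max 32-GPU:   {m1_max_32gpu / rtx3060_mobile_70w:.2f}x slower")    # ~2.61x
```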
Blender doesn't support direct Metal GPU rendering; it still goes from OpenCL to Metal.
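For context on the "GPU Metal" entries above: Cycles gained a native Metal backend in Blender 3.1 (its OpenCL backend was removed in 3.0). Selecting it from Blender's own Python console looks roughly like this; a sketch against the 3.1-era bpy API, not a full render setup:

```python
# Run inside Blender's Python console; bpy is Blender's bundled module.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "METAL"   # "CUDA", "OPTIX" or "HIP" on other hardware
prefs.get_devices()                   # refresh the detected device list

for device in prefs.devices:          # enable every detected device
    device.use = True

bpy.context.scene.cycles.device = "GPU"   # tell the active scene to render on the GPU
```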
Are they really sweating?

Nvidia and Intel are already in the crisis-room-meetings stage of the game. Apple is competing with and beating Intel's highest-end desktop CPU: a 12th-gen effort vs version 1 from Apple. Nvidia is sweating as well. Nvidia is looking at the Ultra and wondering if they might have to up the power of their next-gen 4090 to 800W to be competitive with Apple silicon.
The people who aren't looking closely have mostly dismissed Apple's performance per watt goals. And that was, and has been, the whole ball game. The reason for this obsessive focus was always tied to scaling.
All BS, you say…?

Try running a 12900K + 3090 on 200W…