The RTX 3090 is on a garbage 8nm Samsung node that was used due to TSMC chip shortages.

That 8nm process is an extension of 10nm, per Samsung's own words.

RTX 4000 will be on TSMC 5nm.

If the newest leaks are accurate, Nvidia has absolutely nothing to worry about. The RTX 4000 series will smoke anything Apple has coming down the pipeline for a while.

That's not even mentioning that Nvidia has its own chiplet-based GPUs likely coming with the RTX 5000 series, which have been on the roadmap for a long time now and are likely well into their design phase.

Nvidia isn't Intel or AMD (in the GPU space). They know how to be agile and respond to market competition.
 
Although true, Nvidia is going crazy with the power requirements. Everyone else seems to be trying to get more performance out of less power except Nvidia, although the die shrink should help.
 
The biggest problem is with TSMC. Nvidia, AMD, and even Intel's GPUs will all be made on TSMC's 5nm process, along with AMD's next-gen CPUs. And they're all competing with Apple! I predict another huge shortage of chips.
 
Although true, Nvidia is going crazy with the power requirements. Everyone else seems to be trying to get more performance out of less power except Nvidia, although the die shrink should help.

That's actually false. Nvidia tends to double performance from one generation to the next, so a 3080 is roughly twice as fast as a 2080. If a 500W 4090 matches the performance of two 3090s at 350W each (700W total), it draws about 29% less power for the same work, which is roughly 40% better performance per watt. The problem is you're dwelling too much on the 4090 when other models like the 4080, 4070, 4060, etc. will exist at lower wattages. A 70W mobile 3060 is already about twice as fast as a maxed-out 64-core-GPU M1 Ultra drawing 140W+, so a 4060 will be faster and more efficient than Apple's next several generations.
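(For what it's worth, here is a minimal sketch of that arithmetic in Python. The 500W figure for the 4090 is a leaked/assumed number from the post above, not a measurement.)

# Sanity-checking the perf/watt claim above (Python).
# Assumed figures: a 500W "4090" matching two 350W 3090s.
old_power_w = 2 * 350      # two RTX 3090s: 700W total
new_power_w = 500          # rumored RTX 4090 at the same performance

power_saving = 1 - new_power_w / old_power_w         # ~0.29 -> 29% less power
perf_per_watt_gain = old_power_w / new_power_w - 1   # ~0.40 -> 40% better perf/watt

print(f"Power reduction at equal performance: {power_saving:.0%}")
print(f"Performance-per-watt improvement: {perf_per_watt_gain:.0%}")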
 
Well then, it proves that Apple Silicon isn't that powerful, and not even as versatile as a PC. The problem is that only a handful of applications are being optimized and properly supported, which means most software won't be able to get the full potential out of Apple Silicon. Since even some benchmarks don't run well on Apple Silicon, I kind of doubt the Apple Silicon performance claims.

What are you going to say when 5nm-based GPUs become available that can be compared with Apple Silicon? Apple is using one-to-two-year-old tech for the Mac, so why not match the iPhone's chips?
 
Although true, Nvidia is going crazy with the power requirements. Everyone else seems to be trying to get more performance out of less power except Nvidia, although the die shrink should help.
To be fair, Ampere has close to 2x better perf/watt than Turing. Well, if you use Nvidia's logic anyway.

My guess is that they know quite well that gamers buying $1K+ GPUs for gaming or creatives buying $2K+ prosumer cards don't really care about perf/watt at the end of the day. They just want to push as many frames as possible or render their project as fast as possible.

Don't get me wrong--perf/watt IS important, but in the mobile space. Smartphones, laptops, tablets, etc.

In a set environment where you are plugged directly into a wall outlet? I would hazard a guess that the number who actually care is extremely low.

If people were that worried about their light bills, they wouldn't be in the market for such a GPU in the first place.
 
Yes and yes. Although if the next-gen M2, using a quad UltraFusion design, is able to match the latest from AMD and Nvidia at half the wattage, that is going to be something to see. And who is to say that Apple won't release a PCIe-based GPU, if they keep PCIe slots in the next Mac Pro? Interesting times ahead.
 
I don't think people care about performance per watt on the desktop. They only care about the performance. That's all.
 
Are they really sweating?

Is there enough cross-platform, GPU-intensive software that might make Nvidia, Intel, and AMD users switch platforms?

I truly don't know.

I'd think not, but I don't use this stuff to pay my mortgage, so ¯\_(ツ)_/¯
Absolutely sweating bullets. The bulk of Nvidia's sales isn't the 3090; look how well Apple is doing against the mass-market 3060 and 3070 in laptops and desktops. Nvidia mainly sells GPUs, and unlike Apple it can't offset lost market share with profits from iPhones, AirPods, or watches.
 
Different tool for different job? I guess I am baffled that you are using a Mac to make a Windows game.
Where did I say that? That is not the only thing I do; I also record educational videos on software development and let's-play videos. Sometimes the longer series run to 10 hours of video, and the Mac cuts the export time down drastically.
 
So "M1 Ultra Doesn't Beat Out Nvidia's RTX 3090 GPU Despite Apple's Charts" isn't an accurate headline. It does exactly what the Apple Chart suggests, ramp up the power beyond what the chart states and yes, it beats the M1.
 
There's nothing new about unified memory. PS5 and Xbox Series X do it but they do it properly with GDDR instead of DDR.
Unified architecture, not memory. I didn't say it's new. It's the concept of having "everything" on the same die, which increases flexibility in approaching optimisation and acceleration of compute, instead of a discrete "hardware components" architecture, which will always be less efficient in both computational and energy terms because it's bottlenecked by slower, longer data paths.
 
Nvidia and Intel are already at the crisis-room-meetings stage of the game. Apple is competing with and beating Intel's highest-end desktop CPU: a 12th-gen effort vs. version 1 from Apple. Nvidia is sweating as well. Nvidia is looking at the Ultra and wondering if it might have to push the power of its next-gen 4090 to 800W to be competitive with Apple silicon.

The people who aren't looking closely have mostly dismissed Apple's performance-per-watt goals. And that was and has been the whole ball game. The reason for this obsessive focus was always tied to scaling.

All BS, you say…?

Try running a 12900K + 3090 on 200W…
Even more impressive, as Apple is not in the server business…
 
I don't think people care about performance per watt on the desktop. They only care about the performance. That's all.
I think enough people care that it is definitely a nice thing to have. I don't know how much eco considerations weigh on the minds of people who care about that sort of thing, but it's not just power consumption.
Lower power draw also means a cooler chip, lower fan speeds, and a quieter computer. You don't really want the sound of your computer blowing away in the background if you are mixing sound.
It also allows the packaging to be more compact and elegant.
If you remember back to the switch to Intel, Apple put a big emphasis on performance per watt versus PowerPC then too, even for the Mac Pro when it made the switch.
 
Forget about the performance for a sec. A PC with an RTX 3090 can do practically anything, while a Mac with an M1 Ultra can do… Geekbench. 😂

Nvidia also has the 4000 series coming soon on TSMC 5nm, like Apple. It's over. Apple should stick to iOS, where it truly has the advantage in software and hardware.
You might be missing the point: the game has just started. This is Apple Silicon's first batch… Nvidia's pathetic attempt to buy Arm may give you a hint at the upcoming future. Watch out for AMD too.
 
As a CG artist I am bummed. I was expecting more numbers.

I want to know how it performs in Blender Cycles rendering a scene at 4096 samples, and how it handles Houdini PyroFX, fluids, particle and rigid-body/soft-body simulation, procedurals, and Vellum. Or some Nuke benchmarks in the 2D and 3D viewports?

How about its rendering performance in Octane, Redshift, or V-Ray?

Is it on par with Nvidia OptiX?

What about simulations or parametric modeling in SolidWorks? How about some numbers for Rhinoceros 7.0?

How about the RFO Benchmark for Revit? (Though I am not a BIM user myself.)
Revit does not even support macOS. Seriously.
 
RTX 3090 energy cost per year, including host machine: £981/year

M1 Ultra energy cost: £148/year

Even if it's half the speed, that's a win.

Power savings are for people who don't **actually** need the hardware to generate money.

But since we are talking about pros making ~£50-£100 an hour, that gap amounts to roughly 8-17 hours at their hourly rate. As soon as they render or run the hardware at full steam, that half-the-speed figure leaves them watching the beach ball, in the Ultra's case for twice as long. And most pros will have to run it for way more than 8-17 hours per year, so economically the Ultra loses.
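(As a rough sanity check on that break-even figure, here is a minimal sketch in Python using the energy costs quoted above; the £50-£100 hourly rates are the post's assumption, not data.)

# Energy costs quoted above (per year, including host machine for the 3090 build)
rtx_3090_cost = 981.0   # £/year
m1_ultra_cost = 148.0   # £/year

gap = rtx_3090_cost - m1_ultra_cost   # ~£833/year saved by the M1 Ultra

# Hypothetical freelance rates from the post
for rate in (50.0, 100.0):            # £/hour
    hours = gap / rate
    print(f"At £{rate:.0f}/h the yearly energy saving equals about {hours:.1f} billable hours")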
 
What are you on about?

We generate £50 mil a year off our ML platform.

It’s not just trendy creatives rendering stuff that uses these things.
 


Despite Apple's claims and charts, the new M1 Ultra chip is not able to outperform Nvidia's RTX 3090 in terms of raw GPU performance, according to benchmark testing performed by The Verge.

[Image: Apple's M1 Ultra relative performance chart]

When the M1 Ultra was introduced, Apple shared a chart that had the new chip winning out over the "highest-end discrete GPU" in "relative performance," without details on what tests were run to achieve those results. Apple showed the M1 Ultra beating the RTX 3090 at a certain power level, but Apple isn't sharing the whole picture with its limited graphic.

The Verge decided to pit the M1 Ultra against the Nvidia RTX 3090 using Geekbench 5 graphics tests, and unsurprisingly, it cannot match Nvidia's chip when that chip is run at full power. The Mac Studio beat out the 16-core Mac Pro, but performance was about half that of the RTX 3090.

[Image: The Verge's M1 Ultra vs. RTX 3090 benchmark chart]
The M1 Ultra is otherwise impressive, and it is unclear why Apple focused on this particular comparison, as it is somewhat misleading to customers because it does not take into account the full power range of Nvidia's chip.

[Image: Apple silicon chip lineup]

Apple's M1 Ultra is essentially two M1 Max chips connected together, and as The Verge highlighted in its full Mac Studio review, Apple has managed to successfully get double the M1 Max performance out of the M1 Ultra, which is a notable feat that other chip makers cannot match.

Article Link: M1 Ultra Doesn't Beat Out Nvidia's RTX 3090 GPU Despite Apple's Charts
Not very surprising
 
This is particularly relevant for desktop class computing. Most people are not concerned with power consumption for non-portable devices. As expected, it is going to take YEARS for Apple to truly compete. And even then, I won't be surprised if eventually Apple caves and has dedicated graphics options.
A dedicated GPU card is already coming with the new Mac Pro; it will have a dedicated GPU card but work in tandem with the GPU embedded in the CPU.
 
What are you on about?

We generate £50 mil a year off our ML platform.

It’s not just trendy creatives rendering stuff that uses these things.

For many it's a human's time that's the significant cost, not electricity. Saving £800 a year is not worth it if you lose 1% of the productivity of a person who is paid £80,000 a year.

How much is your electricity bill for training?

Do you use workstations for training? Or are you comparing it to cloud ML? Wouldn't TPUs or dedicated accelerators be the fairer comparison?

How is ML on M1s anyway?

From what I gather, PyTorch, Keras, JAX, etc. don't seem to run well (if at all) on the M1 GPU or Neural Engine, and even tensorflow-metal seems to be buggy and slow (a quick way to check is sketched below).

Eventually it will get there if/when Apple delivers, and many models now are memory-bound anyway, which is where Apple Silicon shines ...
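(If anyone wants to check for themselves, here is a minimal, hypothetical sketch of how you might verify that tensorflow-metal actually exposes the Apple GPU and crudely time a matmul; it assumes the tensorflow-macos and tensorflow-metal packages are installed, and it's a smoke test, not a benchmark.)

import time
import tensorflow as tf  # assumes tensorflow-macos + tensorflow-metal are installed

# With tensorflow-metal installed, the Apple GPU should appear as a GPU device.
print(tf.config.list_physical_devices("GPU"))

# Crude timing of a large matmul; purely an illustrative smoke test.
x = tf.random.normal((4096, 4096))
start = time.time()
y = tf.linalg.matmul(x, x)
_ = y.numpy()  # force execution before stopping the clock
print(f"4096x4096 matmul took {time.time() - start:.3f}s")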
 
Comparing to cloud (AWS).

The key problem we have is that developers are using HP Z platforms at home, which are extremely power-hungry. Never underestimate the power of someone on £80k a year to whinge about a large electricity bill. When there's a chorus, you have a problem.

We have three people running models on M1, but I'm not sure what they're using to do it at the moment; I think it was tensorflow-metal. It is still early days, as you suggest, but this may improve as more focus lands on it.

From my perspective, I'm mostly concerned about the absolute disparity of running this stuff on Windows, which is the corporate mandate. The friction and impedance mismatch with the target environment (Amazon Linux) is horrendous and costs more than anything else. If we can get a half-decent environment on a native Unix platform, then life is going to be a lot easier and cheaper for a lot of people. Engineers don't want to use Windows. They want macOS or Linux. And they can't have Linux, unfortunately, because of our security policy (don't ask - I don't control that).

These Mac Studios are the first step in that direction. They really could knock the whole PC workstation dominance on the head.

And this is the killer. We can’t actually get new workstations today without a 3-6 month lead.
 
Apple's marketing team needs to be let go
They've always told little lies, twisted the truth, or used meaningless stats that weren't so easy to disprove in order to get overenthusiastic "technology" writers to give them good press. However, lately their lies have been outrageous and easy to disprove.

It took "courage" to remove the headphone jack in order to force your gullible customers into spending hundreds of dollars on crappy wireless earbuds. Everyone with half a brain cell knew this was just a money grab. Last years' M1 was supposed to beat a RTX 3080. It couldn't even beat a 3060. This year's M1 Ultra was supposed to beat a 3090...

The real problem isn't Apple; it's the people who want to be seen as cool and technology savvy (they're not either) so bad that they actually do some impressive (and embarrassing) mental gymnastics to justify getting got by Apple. Removing the fans from a computer is just plain dumb. Bluetooth doesn't sound better than wired (although it is convenient), MacOS and having a powerful ARM chip are worthless if they can't run the applications you need for work or school (or gaming), and power efficiency is worthless if it takes you 2-3 times as long to do the same task.
 