And the M2 is going to be on a 4 nm process and significantly faster than M1 silicon, followed within 18 months by the M3 on a 3 nm process. What's your point?
The M1 and M2 are irrelevant versus high-end desktop GPUs. They are not even in the same universe.
Also, 4nm is just a small upgrade over 5nm, mostly in size (it's not even a half-node improvement), while Nvidia's next-generation GPUs will get a full two-node upgrade.
 
First of all, the GPU cores in the Ultra are already 18 months old (debuted in the A14 in Fall of 2020)… This Fall Apple will release new cores that are two generations newer.

Apple won't release a new high-end chip with new GPU cores anytime soon. The M1 Ultra is what's going to compete with the upcoming 4090 for the next 12+ months.

The 3090 is a 360W GPU; the entire Ultra is just over 200W, with the 64-core GPU using 120W. And the newer GPU cores in the A15 are much more powerful and more efficient. I'm sure the next-gen GPU cores in the A16 will be even more so.

That's the max power consumption, and at that power, in 3D rendering, compute, ray tracing, etc., the 3090 will most likely trash the M1 Ultra. Like I've said, the 3090 is an old GPU built on an older Samsung 8nm node. Beating its efficiency is not an achievement; beating its performance is, and Apple is concentrating on talking about efficiency instead of showing concrete performance numbers (there's a rough sketch of the distinction at the end of this post).

And lastly… Apple has only been designing GPUs for a few years now. The fact that they have been able to achieve what they have is astounding.
You act like Apple's GPUs are a one-to-one match for Nvidia's in features and every performance metric. They aren't.
We'll see what Nvidia is going to be able to do on 5nm.
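To make the efficiency-versus-raw-performance point concrete, here is a quick Python sketch using the wattages quoted above. The relative benchmark scores are placeholders, not real results, so treat it only as a way to see how the two metrics can diverge.

```python
# Minimal sketch of the raw-performance vs performance-per-watt distinction.
# Wattages are the figures quoted in this thread; the "score" values are
# placeholders, NOT real benchmark results -- substitute your own numbers.

gpus = {
    "RTX 3090":           {"board_power_w": 360, "score": 1.00},  # normalized baseline (assumed)
    "M1 Ultra (64-core)":  {"board_power_w": 120, "score": 0.60},  # hypothetical relative score
}

for name, gpu in gpus.items():
    perf_per_watt = gpu["score"] / gpu["board_power_w"]
    print(f"{name:20s}  raw perf: {gpu['score']:.2f}x   perf/W: {perf_per_watt:.4f}")

# A chip can win clearly on perf/W while still losing on absolute performance,
# which is exactly the efficiency-vs-performance argument being made here.
```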
 
The M1 and M2 are irrelevant versus high-end desktop GPUs. They are not even in the same universe.
Also, 4nm is just a small upgrade over 5nm, mostly in size (it's not even a half-node improvement), while Nvidia's next-generation GPUs will get a full two-node upgrade.

Too bad Nvidia doesn't know how to do physical VLSI design properly.
 
This thing is 25% faster than Intel's 12900K while only using a third of the power!!

Honest question: why does power consumption matter on a desktop? Is it that it will make companies spend two-thirds less on their electricity bills?

I am also assuming more electricity means more speed, so why not turn up the dial on the M1 Ultra, make it consume as much as Intel's chips, and burn them in performance?
 
Honest question: why does power consumption matter on a desktop?

Beyond just power consumption, there is also the extra heat that consumption generates, how it impacts other components within the machine, and what kind of cooling is necessary (like liquid or phase-change), all of which come with their own direct and indirect costs.

True, for a single user these costs don't really matter.

But if they are scaled across hundreds or even thousands of users in a corporate environment, then they do add up to something that is measurable and impactful.
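To put rough numbers on that, here is a quick back-of-the-envelope Python sketch. The wattages, usage hours, fleet size, and electricity rate are all assumptions for illustration, not figures from this thread.

```python
# Rough cost sketch for the "does power consumption matter on a desktop?" question.
# Every input below is an assumption for illustration, not a measured figure.

def annual_energy_cost(avg_watts, hours_per_day=8, days_per_year=250, usd_per_kwh=0.15):
    """Annual electricity cost for one machine at a given average draw."""
    kwh = avg_watts / 1000 * hours_per_day * days_per_year
    return kwh * usd_per_kwh

fleet_size = 1000                      # hypothetical corporate fleet
low_power  = annual_energy_cost(200)   # e.g. a ~200 W desktop under load
high_power = annual_energy_cost(600)   # e.g. a ~600 W desktop under load

print(f"Per machine:   ${low_power:.0f} vs ${high_power:.0f} per year")
print(f"Fleet of {fleet_size}: ${low_power * fleet_size:,.0f} vs ${high_power * fleet_size:,.0f}")
print(f"Fleet savings: ${(high_power - low_power) * fleet_size:,.0f} per year, before cooling costs")
```

For one machine the difference is pocket change, but across a fleet it becomes a real line item, and that is before the extra air conditioning needed to remove the additional heat.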
 
Honest question: why does power consumption matter on a desktop? Is it that it will make companies spend two-thirds less on their electricity bills?

I am also assuming more electricity means more speed, so why not turn up the dial on the M1 Ultra, make it consume as much as Intel's chips, and burn them in performance?
I think at some point, all computers will be mobile. So power consumption does matter when you are developing CPUs that will eventually go into a mobile computer.
 
I think at some point, all computers will be mobile. So power consumption does matter when you are developing CPUs that will eventually go into a mobile computer.
Matters even more for CPUs that will go into data centers. The building has only so much cooling and can only bring in so many amps of current before you need to spend millions on a new building. We realized that back in the day when we started making server chips.
 
Matters even more for CPUs that will go into data centers. The building has only so much cooling and can only bring in so many amps of current before you need to spend millions on a new building. We realized that back in the day when we started making server chips.

But my understanding is that data centers want more power, not less; they want the fastest chips over saving energy.

Side question: how much heat is generated by servers in a data center? My understanding is that as long as there is AC in the rooms, they should work just fine. In fact, computers work even outside in summer heat. Does a room actually get hot because of the chips? Do they work as heaters?

You must have forgotten Jobs’s Quartz 2D Extreme.

I remember the name Quartz; I do not know what it does, and whatever it does, the consumer did not have to get confused by it. I think it was something like Direct3D that renders graphics; only knowledgeable programmers have to deal with that.
 
But my understanding is that data centers want more power, not less; they want the fastest chips over saving energy.

Depends on what the data center is used for.

If it is a supercomputer data center doing high-end mathematical modeling (TOP500 class), then yes, they want compute power first and foremost.

But when you look at data centers like Microsoft Azure, Amazon AWS, or Google, they are not pushing the envelope of raw compute performance. Their primary workload is holding petabytes' worth of data, and those machines don't need top-end CPUs, just lots of RAM and racks of storage.

Side question: how much heat is generated by servers in a data center? My understanding is that as long as there is AC in the rooms, they should work just fine. In fact, computers work even outside in summer heat. Does a room actually get hot because of the chips? Do they work as heaters?

They generate immense amounts of heat. And yes, AC can cool them, but it can only do so much.

And I have first-hand experience with massive data centers (thousands of servers) suffering thermal overloads during extraordinary heat waves and having to power down a significant amount of the hardware to keep everything within acceptable operating temperatures.
 
But my understanding is that data centers want more power, not less; they want the fastest chips over saving energy.

Side question: how much heat is generated by servers in a data center? My understanding is that as long as there is AC in the rooms, they should work just fine. In fact, computers work even outside in summer heat. Does a room actually get hot because of the chips? Do they work as heaters?



I remember the name Quartz; I do not know what it does, and whatever it does, the consumer did not have to get confused by it. I think it was something like Direct3D that renders graphics; only knowledgeable programmers have to deal with that.

Data centers need throughput, not compute power. There are lots of threads running, but people aren’t generally doing things like 3D rendering or scientific workloads using these things. The heat that is generated is immense. You need massive cooling systems to cool these places. The rooms really do get hot, and they don’t need heating in the winter.

When we designed Opteron, we learned that this was hugely important to data center operators, because they could only cool so many BTUs and only bring in so many kWh of electricity, so being able to get double the compute power in the same amount of heat saved them the cost of having to build an entire new building. So they'd rather pay us for a million dollars' worth of chips than build a new building for $10 million (not to mention the cost of electricity being halved).
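To show how power draw turns directly into a cooling problem, here is a small Python sketch converting a data hall's electrical load into heat output and air-conditioning capacity. The server count and per-server wattage are assumptions; the watts-to-BTU/hr and tons-of-cooling conversion factors are the standard ones.

```python
# Why data-center power budgets are also cooling budgets:
# essentially every watt a server draws ends up as heat in the room.
# Server count and per-server draw are assumptions for illustration.

WATTS_TO_BTU_PER_HR = 3.412      # 1 W of electrical load = 3.412 BTU/hr of heat
BTU_PER_HR_PER_TON  = 12_000     # one "ton" of cooling capacity

servers          = 2_000         # hypothetical data hall
watts_per_server = 500           # assumed average draw under load

total_watts  = servers * watts_per_server
heat_btu_hr  = total_watts * WATTS_TO_BTU_PER_HR
cooling_tons = heat_btu_hr / BTU_PER_HR_PER_TON

print(f"IT load:      {total_watts / 1e6:.1f} MW")
print(f"Heat output:  {heat_btu_hr:,.0f} BTU/hr")
print(f"Cooling need: {cooling_tons:,.0f} tons of AC, plus headroom")
```

Halving the watts per unit of work halves both the heat and the cooling lines above, which is why an operator will happily pay a premium for more efficient chips instead of building a new facility.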
 
I remember the name Quartz; I do not know what it does, and whatever it does, the consumer did not have to get confused by it. I think it was something like Direct3D that renders graphics; only knowledgeable programmers have to deal with that.

AirPort Extreme, then. Not to be confused with AirPort Express!

I don't think Ultra is a significantly worse suffix than Extreme.
 
Too bad Nvidia doesn't know how to do physical VLSI design properly.
Yeah right.
Anyway, rumors about Nvidia's upcoming GPUs


Most likely something like a 4060 Ti (so upper mid-range) will match the performance of the 3090, the 4080 will double it, and the 4090, who knows.
Nvidia in the GPU department is a monster and nothing will change that.
 
Yeah right.
Anyway, rumors about Nvidia's upcoming GPUs


Most likely something like a 4060 Ti (so upper mid-range) will match the performance of the 3090, the 4080 will double it, and the 4090, who knows.
Nvidia in the GPU department is a monster and nothing will change that.

Which tells us what, exactly, about their ability to do physical design?

They perform automated place and route and logic synthesis. Give me a break.
 
Which tells us what, exactly, about their ability to do physical design?

They perform automated place and route and logic synthesis. Give me a break.
It tells us about their ability to make an overall competent GPU.
What's very clear is that Apple's GPU at most matches a 3090 in very limited scenarios.

This is where we are at right now.

The 3090 is over two times faster in compute. It's also a far more feature-packed GPU.
 
It tells us about their ability to make an overall competent GPU.
What's very clear is that Apple's GPU at most matches a 3090 in very limited scenarios.

This is where we are at right now.

The 3090 is over two times faster in compute. It's also a far more feature-packed GPU.

Your response is totally off-topic and doesn’t address physical design at all. Do you know what VLSI physical design means?
 
Your response is totally off-topic and doesn’t address physical design at all. Do you know what VLSI physical design means?
There's nothing to address, as there's not much to support the claim that "Nvidia doesn't know how to do physical VLSI design properly".
Nvidia is an old dog in the industry, and it was never like Intel, which got lazy as soon as it distanced itself from its competitors. And make no mistake: Apple is not competing with Nvidia by any stretch of the imagination.
I don't even like Nvidia as a company, but its products are, and always have been, industry-leading in the GPU department, and that's not going to change anytime soon.
 