Apple is falling behind; the M4 Max is nowhere close to the RTX 5090.

To be fair, this is not a Mac vs. PC question; that's an Nvidia-owned platform. Can you use this platform with an AMD or Intel GPU, for example?

For what it is worth, it seems that the Omniverse platform relies heavily on Universal Scene Description (USD). Apple is one of the founding members of the Alliance for OpenUSD, support for USD is built into all Apple platforms, and USD was prominently featured in WWDC sessions.
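To illustrate the built-in support, a minimal sketch (nothing to do with what Omniverse itself does): ModelIO, which ships with macOS/iOS, can open USD files directly. The "scene.usdz" path below is a hypothetical placeholder.

import Foundation
import ModelIO

// Minimal sketch: ModelIO (bundled with Apple platforms) can import USD directly.
// "scene.usdz" is a placeholder path, not a real asset.
if MDLAsset.canImportFileExtension("usdz") {
    let asset = MDLAsset(url: URL(fileURLWithPath: "scene.usdz"))
    print("Loaded \(asset.count) top-level objects from the USD file")
} else {
    print("USD import not available on this platform")
}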
"No one buys AMD/Intel GPU's" At least there aren't any recent workstation GPU's from those makers. Again shifting the conversation away from Gaming GPU's.

I am aware Apple is a part of the Alliance for OpenUSD. I was just pointing out that they don't appear to have an API that lets you do what Omniverse claims you can do.

Which is fine; it was more or less literally answering the question posed (what can you do on a PC that you can't on a Mac). PC == a system with a workstation GPU, not RGB-bling glass-case gaming stuff.
 
"No one buys AMD/Intel GPU's" At least there aren't any recent workstation GPU's from those makers. Again shifting the conversation away from Gaming GPU's.

I am aware Apple is apart of the Alliance for OpenUSD. Was just pointing out they don't appear to have an API that allows you to do what Onmiverse is claiming you can do.

Which is fine, it was more or less literally answering the question posed (what can you do on a PC that you can't on a Mac). PC == system with Workstation GPU. No RGB bling glass case gaming stuff.
While accurate, I feel like this misses the mark somewhat.

Primarily, Apple sells Macs to a “prosumer” demographic, not necessarily one that cares about workstation-class hardware and software. They generally don’t compete in spaces that require large hardware contracts and huge volumes, where support is typically crucial.

For Apple’s intended audience, the answer to “What can you do with a PC that you can’t do with a Mac?” is “play my favorite games”. And unless they address that, the market for Macs is unlikely to expand.

As an aside, I will say that the switch to the M series has improved video editing performance significantly, at least going by the lack of threads about how far ahead Nvidia is in video editing. At least there’s improvement in some areas.
 
"No one buys AMD/Intel GPU's" At least there aren't any recent workstation GPU's from those makers.
? AMD's CDNA 3 GPUs launched at the end of 2023 and have supposedly been fairly successful in the market (maybe not on Nvidia's level, but still). CDNA 4, meanwhile, is reportedly set to launch in the middle of this year, with "CDNA Next" (or perhaps UDNA) launching 2026/2027. For Intel, though, it's true: Intel's workstation GPU efforts have been delayed, and though they did release Gaudi 3, it's been suggested that the lack of success and roadmap delays in workstation AI products like Gaudi/Falcon Shores were the reason CEO Pat Gelsinger was fired (though this is not confirmed).
 
AMD offers CDNA in a non-MI accelerator format?
 
Aye, the MIs are the equivalent of Nvidia Hopper and Blackwell workstation cards (well Hopper anyway) ... when you say workstation, which Nvidia cards are you thinking of?
The Quadro cards that they don't call Quadro anymore. I think they are using RTX Pro branding now. Technically they are different dies from the Blackwell data center dies (the GB20x series vs. GB100), though it has never been made clear why.
 
Just to make things more confusing, Nvidia actually does call some of their Blackwell superchips GB200 and soon GB300, but I *think* the GB200 dies are still "Blackwell 1.0"/GB100 and there are simply two of them; I'm not sure about the GB300s, as those are called "Blackwell Ultra". "Spark" is GB10, but it isn't clear exactly which die that is. So they are using GB for both the die generation and for some product names where the product combines a GPU die with a CPU. Fun.

Typically Nvidia splits the lineup by making the data center GPU AAx00 and the non-data-center GPUs AAx02-8, where AA is the generation, x is the sub-generation, and the final number is the die subtype (though the non-data-center GPUs can still carry "Pro" or "Server" monikers). With Hopper, Nvidia's previous data center GPU didn't even share the same generation as the "Ada" GPUs; why they share one this time is, I think, because the tensor cores are the same. However, "Blackwell 2.0" gets fewer of them and likely misses out on other features like interconnects and FP64 while gaining raster capabilities, which is probably why the die sub-generations are considered different, though with Nvidia it is never entirely clear what is truly different at the hardware design level, what has been physically fused off, and what is controlled by firmware/drivers.
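If it helps, here's a toy sketch of that naming split as I read it. The die codes listed are real parts, but the "ends in 00" rule is just the rule of thumb from the paragraph above, not an official Nvidia scheme.

// Rule of thumb from above: <generation letters><sub-gen><die number>,
// where the x00 die is the data center part and x02-x08 are everything else.
// This is a heuristic for illustration only, not an official Nvidia naming spec.
let dies = ["GH100": "Hopper (data center)",
            "AD102": "Ada (consumer/workstation)",
            "GB100": "Blackwell 1.0 (data center)",
            "GB202": "Blackwell 2.0 (consumer/workstation)"]

func looksLikeDataCenterDie(_ code: String) -> Bool {
    // Data center dies end in "00" under this rule of thumb
    return code.hasSuffix("00")
}

for (code, family) in dies.sorted(by: { $0.key < $1.key }) {
    print(code, family, looksLikeDataCenterDie(code) ? "-> AAx00: data center" : "-> AAx0n: client")
}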

But back to the main discussion: yeah, the RTX Pros are basically identical to their non-Pro cousins, with differing core counts (mostly lower, sometimes higher) and memory (almost always higher, much higher). For tasks that fit into the smaller memory pool, the main difference in performance is thus in the drivers. How that affects Omniverse, which bills itself as non-Pro-RTX friendly, is unclear to me, and it is more than possible that Nvidia has tuned the drivers so the RTX Pro cards outperform their non-Pro cousins on a per-TFLOP basis here (and of course the additional VRAM likely helps for larger simulations). The data center Blackwell parts should perform much better thanks to their higher matmul throughput.
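To be clear about what I mean by "per-TFLOP" comparisons, something like the sketch below, where both the scores and the TFLOPS figures are made-up placeholders rather than measured numbers:

// Per-TFLOP normalization: divide a benchmark score by peak FP32 TFLOPS so cards
// with different shader counts/clocks can be compared on driver/software efficiency.
// All numbers here are hypothetical placeholders, not real measurements.
struct Card { let name: String; let score: Double; let peakTFLOPS: Double }

let cards = [
    Card(name: "non-Pro RTX (hypothetical)", score: 100, peakTFLOPS: 100),
    Card(name: "RTX Pro (hypothetical)",     score: 115, peakTFLOPS: 110),
]

for card in cards {
    print(card.name, "score per TFLOP:", card.score / card.peakTFLOPS)
}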

AMD is supposedly launching its own line of RDNA 4 Pro GPUs starting in June/July, but, like RDNA 4 in general, it won't be competing at the highest end, and I think it has indeed been a while since they released a non-datacenter/server "Pro" GPU. Though, as with the desktop RTX Pros, while they are available to buy standalone, my suspicion is that a lot, if not most, will be sold as part of server configs anyway (as a side note: heck, you can even find people selling individual PCIe H100s, my guess being that they are mostly reselling them after buying a server; one site even offers MI300s with an "ask for a quote").
 
Trying to compare S/C high-fidelity macOS computers with mono Windows systems is a fool's errand. A Mac is not a PC and never will be.
I almost blew coffee through my nose, lol. I thought the SAME thing when reading it, and the 5090 costs more than the 20-core-GPU MacBook Pro, even though the MBP comes with less unified RAM than the 5090 has on board. That, too, makes this an apples-to-peanut-butter comparison.

OP, you’re aware that the M3 Ultra can be fitted with 512GB of RAM, right? Even at 128GB you’re at about the same price as a TRUE 5090-equipped piece of cr@p, I mean PC, and the M3 Ultra will eat ANY Windows machine for actually making, creating, or coding, i.e. for using a computer as something other than a console/gaming machine. Not to mention the 1200-watt power supply needed to run a 5090 plus a current AMD/Intel CPU and a motherboard with Thunderbolt 5 I/O, versus a machine with SIX separate TB5 ports, 10Gb Ethernet, SuperSpeed USB-A ports, and SDXC, all in a package that weighs about the same as a 5090 card. And it’s actually available! You can BUY one at MSRP, usually lower, and the entire machine sips about 10% of the wattage of your 5090, without a vacuum cleaner for audio (fan noise).

The 5090 is a damn nice card, the only 50-series card worth the year-to-year upgrade, given how it smashes the 4090. The same can’t be said of the 5080 or 5070… neither of which you can get at MSRP anyway.


Too funny, dumastudetto, you nailed it. He’s benching a mid-level MacBook Pro that costs less than the card alone, never mind the rest of the parts needed to actually use it: motherboard, CPU, SSD, RAM, case, cooling, and a dedicated power supply. We’ve got 15-amp, 1800-watt receptacles in the US, and we’re slowly bumping up against that limit with x86!

Nvidia has big challenges ahead
 
My uninformed opinion on this is that while Apple has done something fantastic with the M-series SoC, I don't believe it can hold up to purpose-built components like a dedicated GPU.

In all honesty, I don't think it will ever be able to compete with xx50-class GPUs from Nvidia, not because Nvidia is better at GPU design, but because they have so much more flexibility and ability to craft a GPU. Apple is constrained to keeping things on a single piece of silicon.

Just look at how much VRAM the 5090 has, and of course the power requirements.
 
Advanced packaging will make that constraint less relevant soon, possibly a week from now, when SoIC is rumored to be introduced, and 2027 at the latest. Apple can do the kinds of things Nvidia does when it uses advanced packaging to combine the Grace CPU (which is Arm-based) with Blackwell GPUs.

Apple is skating to where the puck will be tomorrow, not to where it is today. A card like the 5090 is the old way of doing things. Look at the new Nvidia DGX Station — it requires a separate GPU card for video (that is, to use a display), while the main Blackwell GPU is packaged on the main board.
 
My uninformed opinion on this is that while Apple has done something fantastic with the M-series SoC, I don't believe it can hold up to purpose-built components like a dedicated GPU.

In all honesty, I don't think it will ever be able to compete with xx50-class GPUs from Nvidia, not because Nvidia is better at GPU design, but because they have so much more flexibility and ability to craft a GPU. Apple is constrained to keeping things on a single piece of silicon.

Just look at how much VRAM the 5090 has, and of course the power requirements.

You are right that they are playing in entirely different weight categories. Still, there are ways for Apple to compensate. I agree with @tenthousandthings that advanced packaging is probably going to become more important going forward. Apple can afford to throw more money at the problem compared to others — as long as they spend less to make a chip than Nvidia charges for theirs, it's a net win for Apple.

In addition, Apple's GPU tech is developing at an incredible rate. The M4 Max (5120 shaders @ 1.58 GHz) goes toe-to-toe with the mobile RTX 4080 (7424 shaders @ 1.66 GHz) in Blender; that's roughly 50% better efficiency on complex workloads. And there is a lot of evidence that they are setting their architecture up for wider execution. I believe a realistic expectation is 50-100% more performance for a roughly 30% larger die, which incidentally pairs very well with 3D die stacking and similar techniques for improving effective logic die area.
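For reference, the back-of-the-envelope math behind that "roughly 50%" figure, using only the shader counts and clocks quoted above:

// Back-of-envelope check of the ~50% efficiency claim, using the figures quoted above.
let m4MaxRaw    = 5120.0 * 1.58   // shaders × GHz for the M4 Max
let rtx4080mRaw = 7424.0 * 1.66   // shaders × GHz for the mobile RTX 4080
print(rtx4080mRaw / m4MaxRaw)     // ≈ 1.52: the 4080M has ~52% more raw shader-clock
                                  // throughput, so matching it in Blender implies
                                  // ~50% more work per shader-clock on the Apple side.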
 
Apple also has the advantage of not having to build a GPU for a market that may or may not collapse on itself (AI), which is where all of Nvidia's money comes from (data center). And a process node (or two) advantage.

Are Apple's shaders the same as Nvidia's (is it okay to assume you were talking about ALUs)?
 
Yeah, as "shader" I meant a single unit of scalar FP computation. The companies advertise hardware capabilities very differently, so this is a way to make things comparable. The basic units are pretty much the same for all architectures — they can do a FMA/FADD/FMUL per cycle sustained. Latency, execution granularity, hazard treatment etc. can of course differ. But if all you want it say, multiply two arrays of element-wise, you'd get shaders*clock FLOPS (double this for multiply + add since it is executed as a single operation).
 
You really think it's going to collapse?

Collapse? Unlikely. However, things can change rapidly. Nvidia's approach to ML is not the most efficient one when it comes to moving data. Their business can potentially be disrupted by a competitor who can ship a more cost-efficient ML-specific accelerator. While Nvidia's big selling point is the strong software ecosystem and tooling, I'd argue this matters less for ML applications, since from the programming standpoint they are not very complex systems. You don't need crazy programmability and sophisticated debug tools to implement a large transformer-based model.
 
You really think it's going to collapse?
Maybe collapse is too strong a word. It is clear they are affected by export controls on their GPUs, which is something Apple doesn't have to worry about (yet). Models like DeepSeek should be a concern (assuming they really don't need all the compute that "traditional" models do).

Nvidia has spent time/resources supporting more data formats in hardware than Apple has (so far), to the detriment of generational improvements in raster/RT performance (imo).
 

That is not quite accurate. Nvidia has delivered plenty of interesting advances in recent years. And I am not talking about the frame generation nonsense. They have advanced micro-geometry support, displacement maps for raytracing, unified virtual memory, and work synchronization for complex compute.

At the same time it does seem like they’ve reached a plateau with the current architecture. It is certainly scalable and area-efficient, and it has allowed Nvidia to stay on top for many years, but they will probably need a radically new architecture for the next jump in performance.
 
Apple also has the advantage of not having to build a GPU for a market that may or may not collapse on itself (AI), which is where all of Nvidia's money comes from (data center). And a process node (or two) advantage.

Are Apple's shaders the same as Nvidia's (is it okay to assume you were talking about ALUs)?

I actually think Nvidia's biggest potential for collapse isn't on the AI side of things, but in the consumer market. There have been so many issues and mini-scandals related to the 50-series that Nvidia actually limited pre-release reviews of the RTX 5060 to those reviewers who agreed to only test according to Nvidia's guidelines. They didn't want reviewers knocking the cards for their 8GB VRAM (which isn't enough to even run some current titles), so they did all of the following (none of which Nvidia had done at any point in either the 40 or 50-series releases to date):

- Blocked most independent reviewers from even getting cards to review prior to launch. Those who did get cards had to agree to a strict set of testing criteria and restrictions on what they could and could not say about the cards.
- Did not release any drivers for the new cards until the day of launch.
- Forced AIB partners not to send cards to the aforementioned reviewers.

All of that speaks volumes about Nvidia's lack of confidence in their consumer product line. In Nvidia's defense, maybe they had reason to lack confidence, given their issues with overheating, the redesigned 12V connector still causing cards to overheat and melt, missing ROPs (processing units) on some cards, pricing that is disproportionate to the relative uptick in performance between generations, and trying to claim that frame generation equals real framerates.
 
The consumer products don't matter (at least as far as revenue/profit is concerned). I actually think the internet has overblown some of the issues Nvidia has had (especially on the driver side).
 
I actually think Nvidia's biggest potential for collapse isn't on the AI side of things, but in the consumer market. There have been so many issues and mini-scandals related to the 50-series that Nvidia actually limited pre-release reviews of the RTX 5060 to those reviewers who agreed to only test according to Nvidia's guidelines. They didn't want reviewers knocking the cards for their 8GB VRAM (which isn't enough to even run some current titles), so they did all of the following (none of which Nvidia had done at any point in either the 40 or 50-series releases to date):
The consumer/gaming segment is tiny compared to its data center segment. If gaming collapsed, they'd not bat an eyelash.
[Attached chart: Nvidia revenue by segment, data center vs. gaming]
 
That chart is unreal. Gaming was actually the most important segment until the LLM takeoff. I remember attending a course on parallel computing in grad school some 13 years ago, when GPU computing was still in its early stages; the instructor (an Nvidia engineer) remarked that "mainstream gaming was driving the market and chip design" and that the CUDA side of things was a side bet that wouldn't be possible without gaming card sales fueling the company. Then two years later the deep learning boom happened, and now LLMs/GenAI have brought us to the point where the tables have completely turned...
 
Gaming was actually the most important segment until the LLM takeoff.
"Was" is the keyword. Now it's not, so how does that make the chart unreal? It's actually what Nvidia has been reporting, and yes, they're making insane amounts of money on AI. So much so that there have been news reports that Nvidia is curtailing GPU manufacturing to divert resources to their data center business unit.
 
In American slang, “unreal” means something like “amazing” (similar to “incredible”); nobody is disputing the chart.
 