Nvidia has been relatively quiet about their vehicle autonomy platform/stack recently. I am glad to see someone is using it, haha.
I think that's due to price. We're using it for research, not actual product development. I know that Volvo is using it for research and product development. The entry cost is high: it requires two DGX systems at about $500k each. There was talk a few years ago of offering a free version that would run on any Nvidia hardware, but I'm not sure that ever happened. They're pushing their own cloud services now, so maybe they're selling it as a service these days... I don't know. And truth be told, we could probably get away with CARLA now that they offer SimReady for importing assets from Nvidia Omniverse (I haven't tried it). I'm not sure how accurate their sensor simulation is in more recent versions, as it's been a while since I last used it. Everything we need, we run on-premise. Cloud services are great, but they can get very expensive, especially if people aren't paying attention.
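For anyone curious what the CARLA route looks like, here's a minimal sketch (Python, against CARLA's 0.9.x API) that spawns a vehicle and attaches an RGB camera to it. The blueprint IDs, resolution, and mount position are illustrative defaults I picked, not anything from our actual setup:

# Minimal CARLA sketch: spawn a vehicle and attach an RGB camera.
# Assumes a CARLA server is already running locally on the default port.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn a vehicle at the map's first predefined spawn point.
vehicle_bp = bp_lib.find("vehicle.tesla.model3")
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Configure an RGB camera and mount it above the hood.
cam_bp = bp_lib.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "1280")
cam_bp.set_attribute("image_size_y", "720")
cam_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(cam_bp, cam_transform, attach_to=vehicle)

# Dump each rendered frame to disk as it arrives.
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))

How faithful those rendered frames are to a real camera is exactly the sensor-accuracy question I mentioned; the API just makes it easy to swap sensors and resolutions.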
 
DeepSeek R1 is already going head to head with Meta's Llama and Google's Gemini, and to be honest, outside of Nvidia's own website I have heard and read literally nothing about anyone using Nvidia's models - most likely because they have nothing for the end-user and developer markets. Regardless of whether you are running existing models or training new ones, even a card like the 5090 will be seriously constrained by any model that needs more than 32GB of memory, because the model has to be partially offloaded to system RAM. With the majority of GPUs in use having only 8-16GB of VRAM, those constraints mean users have to limit themselves to even smaller models. The only way to get a GPU with more than 32GB is to buy the datacenter versions, which cost far more than most end users either can or will pay. That is why Nvidia will shoot themselves in the foot over the long run: prioritizing datacenter parts and AI over the consumer market is already showing up as poor quality control on both the 40- and 50-series RTX cards.
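To put rough numbers on that constraint, here's a quick back-of-the-envelope sketch (my own arithmetic, not a benchmark) of how much memory just the weights of a model need at various sizes and quantization levels; anything over the card's VRAM has to spill into system RAM:

# Back-of-the-envelope: do a model's weights alone fit in a 32GB card?
# Ignores the KV cache, activations, and framework overhead, all of
# which only make things worse.
VRAM_GB = 32  # RTX 5090

def weight_memory_gb(params_billions, bits_per_weight):
    # params * bits / 8 gives bytes; divide by 1e9 for (decimal) GB
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(8, 16), (32, 4), (70, 4), (70, 8)]:
    need = weight_memory_gb(params, bits)
    verdict = "fits" if need <= VRAM_GB else "spills into system RAM"
    print(f"{params}B model @ {bits}-bit: ~{need:.0f} GB of weights -> {verdict}")

Even a 70B model at aggressive 4-bit quantization (~35 GB of weights) already overflows a 5090, and an 8-16GB card limits you to much smaller models still.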
I think you’re missing the forest for the trees here.

NVidia's AI offerings to datacenters aren't them selling RTX 5090s to big clusters; it's bespoke hardware. Their issues with consumer graphics cards don't carry over to their de facto main business.

Now, in terms of consumers and small operations that can't afford or don't need tens of thousands of dollars in AI clusters, you have a point. But even then, NVidia's tools are so ubiquitous that even if the on-paper specs for a Mac seem better, the tools themselves aren't set up for it.
 
I think you’re missing the forest for the trees here.

NVidia's AI offerings to datacenters aren't them selling RTX 5090s to big clusters; it's bespoke hardware. Their issues with consumer graphics cards don't carry over to their de facto main business.

Now, in terms of consumers and small operations that can't afford or don't need tens of thousands of dollars in AI clusters, you have a point. But even then, NVidia's tools are so ubiquitous that even if the on-paper specs for a Mac seem better, the tools themselves aren't set up for it.

I wasn't referring to 5090s in datacenters - they don't have enough onboard RAM to be useful for those use cases. Datacenters use HBM-based GPUs anyway, because the memory-bandwidth requirements there are insane. And you missed my point - their focus on AI above all else has caused the numerous issues with both the 40- and 50-series RTX cards, because they either cannot or will not devote the resources needed to ensure that power connectors don't melt, that cards ship with the correct number of ROPs, and that driver updates don't break things as simple as GPU temperature monitoring. Nvidia even removed the hotspot sensor from the 50 series, which was a dumb move as far as the overclocking and high-performance gaming communities are concerned.

With the resources and staffing Nvidia has, there is no excuse for the quality control and driver issues on the consumer side. Given that even their "redesigned" 12v power connector is still melting on 50-series cards, it's safe to say that they don't care enough to actually fix things.
 
Shame it's an M3 instead of an M4, but interesting nonetheless. The thing that stood out the most was the awful setup time on the Windows laptop. What is taking so long?

It's because Microsoft still can't make a setup process that takes less than five minutes to complete, and they keep trying to get you to use OneDrive, 365, Copilot, and 20 other things that just slow down the machine.
 
I would love to use Macs or AMD cards (or Tenstorrent or whatever) for ML dev, since Nvidia clearly uses its market position to make us pay... But the reality is that, still, these other options are only viable from an idealist point of view. It just does not make sense yet to drop Nvidia if you want to be sure you can develop a product of some sort. The small company I work for recently bought an ugly H100 server from HP with dual (really low-performance) Xeons for about $45,000, just because we need more than 24GB of memory. I could already run the model (computer vision) on my M3 Max laptop, but I've had enough problems across the ecosystem that I didn't dare invest in just a Mac Studio, even though that would have cost us under $10K.

Yeah, CUDA plus listening to devs was a smart move by Nvidia back in the day. When Apple developed OpenCL, I was hoping for a break in that monopoly. But then even Apple dropped OpenCL, and when AI had its resurgence around 2015, CUDA was still the only reasonable path forward. Metal is a good API for a lot of things and is a lot easier to work with than OpenCL for most people, but being locked down (and Objective-C!) kept it from being a candidate for anything outside Apple's devices. Instead we got the capable but generally horrible Vulkan as the slow-moving alternative for graphics and compute. Not even AMD uses it for their own AI APIs; instead they have that Linux-only CUDA simulacrum called ROCm/HIP.
Just to be clear: for most inference, Mac hardware and software are OK. It's the training part that is problematic.
What is the solution then?
1) Make sure the big, currently used AI frameworks actually work with Apple hardware (I am looking at you, PyTorch with the MPS backend!) - see the sketch after this list.
2) Open up to the dev community in a much more open way. Treat pros and devs as pros and devs instead of idiot consumers. I want a macOS distro without the babysitting. Arch macOS, anyone?
3) If MLX is supposed to be the way forward: accelerate its development, and open it up to supporting other backends if possible.
4) Ship high-perf hardware that is within striking distance of Nvidia, not an order of magnitude worse as it is now.
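To illustrate point 1: ideally the only Apple-specific part of a training script would be a few lines of device selection like the sketch below (standard PyTorch, nothing project-specific); everything after that should behave identically on MPS, CUDA, or CPU, and today it too often doesn't on MPS.

# Device selection with an MPS fallback, plus one toy training step
# to exercise the backend. Nothing here is Apple-specific beyond the
# torch.backends.mps check.
import torch
import torch.nn as nn

if torch.backends.mps.is_available():
    device = torch.device("mps")   # Apple-silicon GPU via Metal
elif torch.cuda.is_available():
    device = torch.device("cuda")  # Nvidia GPU via CUDA
else:
    device = torch.device("cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random data, just to exercise the backend.
x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"backend: {device}, loss: {loss.item():.3f}")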
 
So what would the Mac Studio M4 Max be roughly equivalent to in terms of NVIDIA today?

RTX 5060?
 
So what would the Mac Studio M4 Max be roughly equivalent to in terms of NVIDIA today?

RTX 5060?
This is just a single test, but if you look at Blender scores, the 40-core M4 Max GPU, at ≈5,200, is about the same as the 4080 laptop GPU, and between the 5060 Ti desktop (4,600) and the 5070 Ti desktop (6,900).

Sources:
 
Comparing the M4 Max to the RTX 5090 is absurd — they are in entirely different hardware categories. The M4 Max is a laptop GPU, while Nvidia doesn't offer the RTX 5090 for laptops. The so-called RTX 5090 Mobile is actually a cut-down RTX 5080, and in this case, it's hardly possible to even consider battery-powered operation due to its very high power consumption.
The M4 Max is used in the desktop Mac Studio, and this makes the computer's power consumption remarkably low relative to the performance it offers. As a result, it remains virtually silent in most cases, despite its very small enclosure.

To make comparisons with top-tier RTX cards, you'd have to look at the M3 Ultra — and rely on guesswork or a crystal ball when it comes to higher-tier M-series chips, since they haven’t been released yet.

The M3 Ultra is above the RTX 5070, and that means:
- the M4 Ultra would be close to the RTX 5080
- the M5 Ultra should be close to the RTX 4090 (between the RTX 5080 and RTX 5090)
- only a hypothetical M5 Extreme would be significantly faster than the RTX 5090
 
Comparing the M4 Max to the RTX 5090 is absurd — they are in entirely different hardware categories.
Agree.
The M4 Max is a laptop GPU
I would say that, as Apple uses it, the M4 Max is both a laptop and a desktop GPU. And the latter remains a problem for Apple.

As a laptop GPU, it's impressive—giving about the performance of a 4080 laptop GPU without the weight/heat/noise of a 4080 PC laptop.

As a desktop GPU in a relatively high-end (≈$2,700) desktop, it's somewhat weak for the category, since it appears closer in performance to a 5060 Ti than a 5070 Ti.
 
As a desktop GPU in a relatively high-end (≈$2,700) desktop, it's somewhat weak for the category, since it appears closer in performance to a 5060 Ti than a 5070 Ti.

$2700 is the price of the RTX 5090 alone, not an entire PC system.

Right now, Apple has no hardware in the high-end market, since it's hard to consider the Mac Pro with the two-year-old M2 Ultra as such—especially given that in many cases it's outperformed by the M4 Max. That's why I mentioned the M3 Ultra, which, although aging, could at best be considered entry-level high-end hardware. Incidentally, 512 GB of unified memory—most of which can be used as high-speed GPU memory—offers capabilities no PC on the market can match, making the RTX 5090 seem outright laughable in that context.

It’s obvious that the high-end market is not profitable for Apple at the moment, but it won’t become so until Apple enters the server and data center market with hardware superior to that of Nvidia and AMD.
Sorry, Apple, but here you need to take a risk and make a bold decision.
 
$2700 is the price of the RTX 5090 alone, not an entire PC system.
You've misunderstood my post. I was talking about entire PC systems, not about PCs with a 5090. I was saying that, for ≈$2,700, it would be nice if Apple offered performance comparable to a mid-range discrete GPU, like the 5070 Ti desktop found in ≈$2,500 PCs.

If we ignore the upcharges for RAM and storage, Apple has managed to become price-competitive for CPU performance, but they're not there yet for GPU performance.


 
Ah, ok. Misunderstood.

Two comments, however:
- Apple hardware is not gaming hardware
- gaming PC hardware will always offer more 'pure' GPU power at a lower price, otherwise it would stop selling (I mean for 'semi-professional' applications, of course)
 
Gaming hardware is hardware designed primarily for gaming.

You see, the reason I am confused is because Apple GPUs are designed with both gaming and professional applications in mind. Some other architectures need to choose what to invest in, which is less of a problem for Apple. So it would appear that your definition misses the mark somewhat.
 
You've misunderstood my post. I was talking about entire PC systems, not about PCs with a 5090. I was saying that, for ≈$2,700, it would be nice if Apple offered performance comparable to a mid-range discrete GPU, like the 5070 Ti desktop found in ≈$2,500 PCs.

If we ignore the upcharges for RAM and storage, Apple has managed to become price-competitive for CPU performance, but they're not there yet for GPU performance.



There are other priorities than just raw power, though.

42cm x 20cm x 46cm, 15.37kg

vs

9.5cm x 19cm x 19cm, 3.64kg

Even with a desktop computer, not every user has the space or the inclination to put a massive, heavy, RGB-LED-laden tower case on their desk.
 
You see, the reason I am confused is because Apple GPUs are designed with both gaming and professional applications in mind. Some other architectures need to choose what to invest in, which is less of a problem for Apple. So it would appear that your definition misses the mark somewhat.

There are other considerations for Apple, though, not just financial ones:

For example, premium gaming laptops tend to come with OLED screens, which have premium image quality and top response times and are great for gaming. They're a bad choice for productivity, though, because spending 8 hours a day with the same software UI onscreen, whether that's Xcode, Ableton, or Final Cut Pro, will burn in an OLED within a year.

And even in terms of raw CPU/GPU power, gamers tend to sacrifice everything for computing power, and are happy to have a massive box to house power hungry, hot components mostly designed to sit on the desk of a single 20something male.

Apple will tend to sacrifice raw power so that the same CPU/GPU can sit within their phones, tablets, thin-and-light laptops, power-user laptops, and desktop computers, and in all cases they balance computing power with portability and discreet enclosures. Their target market is a lot broader than the gaming market.
 
You see, the reason I am confused is because Apple GPUs are designed with both gaming and professional applications in mind. Some other architectures need to choose what to invest in, which is less of a problem for Apple. So it would appear that your definition misses the mark somewhat.
You should not be confused. 'Gaming hardware' refers to PCs, not Macs.
 
I would like Apple to bring out a big-iron GPU that would take the performance crown from Nvidia, but now I'm wondering what the point of doing that would be.

Even if it handily beat the Nvidia flagship in gaming on paper, there are many popular titles that aren't even available on macOS in the first place.

The M series already punches far above its weight in video editing, and unless someone requires the absolute most powerful GPU and is willing to spend hundreds or thousands on it, a Mac is already an extremely good value.

In 3D modeling Apple is still a little behind, but, as above, basically only behind the bleeding edge.

Maybe I'm missing something, but what would Apple, or even we as consumers, gain by taking the performance crown?
 
That is exactly it, @OptimusGrime .

Anyone who thinks it's all about the hardware has to account for Nintendo somehow. Their consoles over the past 20 years or so have always had trailing-edge hardware, both CPU and GPU. You don't need the fastest, most expensive, and most power-hungry GPU in the world to do well in gaming.
 
That is exactly it, @OptimusGrime .

Anyone who thinks it's all about the hardware has to account for Nintendo somehow. Their consoles over the past 20 years or so have always had trailing-edge hardware, both CPU and GPU. You don't need the fastest, most expensive, and most power-hungry GPU in the world to do well in gaming.
That almost sounds like an endorsement for Apple to make their own games (and have their own IP).
 