If Apple were to introduce a new Pro Display XDR with Thunderbolt 5, now that it has products that support it, would AMD cards include Thunderbolt 5? And would the Mac Pro 7,1 be able to support that?

No support without some substantive kludges. There is no x4 PCIe v4 on the 7,1, and pragmatically that is a required input to a discrete TBv5 chipset solution. The 7,1 launched with a PCIe backhaul that wasn't positioned to 'last ten years'. There are two x4 PCIe v3 feeds via MPX, but pragmatically that isn't a single x8 PCIe v3 link. They could approximate a x4 v4 with a x8 v3, but they would need to 'steal' that from another slot (which means another dangling internal cable; Apple is highly unlikely to do that).
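For context on the lane math, here is a rough back-of-envelope sketch in Python; the per-lane rates are the standard published PCIe effective rates, and treat the exact figures as approximations rather than measured numbers:

```python
# Rough PCIe bandwidth arithmetic for the TBv5 host-link discussion.
# Approximate effective per-lane rates after 128b/130b encoding:
#   PCIe 3.0: ~0.985 GB/s per lane (8 GT/s)
#   PCIe 4.0: ~1.969 GB/s per lane (16 GT/s)
PER_LANE_GBS = {"v3": 0.985, "v4": 1.969}

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Approximate one-direction link bandwidth in GB/s."""
    return PER_LANE_GBS[gen] * lanes

x4_v4 = link_bandwidth_gbs("v4", 4)  # what a discrete TB5 controller expects
x8_v3 = link_bandwidth_gbs("v3", 8)  # the kludge: same ~7.9 GB/s, twice the lanes
print(f"x4 PCIe v4 ~ {x4_v4:.1f} GB/s, x8 PCIe v3 ~ {x8_v3:.1f} GB/s")
# Thunderbolt 5's 80 Gb/s (~10 GB/s) symmetric link already exceeds either,
# which is why a starved host link undercuts a TB5 add-in card.
```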

It is kind of quirky because DisplayPort 2.x uses some of the foundational transport mechanism of Thunderbolt, just with different protocols layered on top and all the high-speed data outbound. The 'easier' workaround, if Apple wanted to bother, is to just have the XDR take DisplayPort 2.x as an input (and let some other parts drift).


The larger kludge solution would be a standard AMD GPU (no MPX connector) and some Thunderbolt adapter card that soaked up the x8 slot (with an uplift switch to generate some x4 PCIe v4 provisioning) and a loop-back DPv2 cable from card to card. Apple wouldn't do it, but maybe some third party would if it had some basic driver support.
(These 'Frankenstein' TB cards with external loopback never had deep, significant traction in the general PC market. They are even less likely to work in the Apple subset.)


If not, there would be no reason for Apple to bother offering new-generation AMD cards. At least as far as I can guess.

The T2 and the Intel CPU are also huge factors in why there is no huge effort here. Myopically looking just at the GPU misses the forest for a tree. macOS on Intel is getting 'caretaker' work put in.

The final huge missing piece here is the 'attachment' rate of XDRs onto the Mac Pro 7,1. It very likely is not all that high. More than likely, driver work on an AMD GPU update would attach to non-Apple monitors far more than Apple ones. There is no huge XDR synergy here. The next-gen XDR will most likely be sold paired with M-series Macs. The XDR generating AMD sales is a 'tail wags dog' story.


By the way, has Apple ever offered new hardware for machines it no longer sells?

That was another side effect of MPX on the 2019: those cards would not fit in the obsolete 2012 models.
There occasionally used to be some stuff that 'happened to work', but certified by Apple... no.
 
The closest precedent I can think of is offering RX 580 support on 5,1 machines.

Polaris (of which the 580 was a member) was used in Mac systems other than just the Mac Pro. Those other systems were the major drivers of support. The 5,1 is mainly covered because Apple treated the 2012 as different from the 2010 to artificially extend the vintage/obsolete countdown clock, in part because the MP 2013 was going to linger for about 5-6 years.



Presumably updated AMD drivers would also affect Intel eGPU users.

Why? They don't use the same compilers/optimizers for low-level machine code.


Would Apple produce a new display that's TB3 compatible? IIRC a number of the TB4 displays from 3rd parties didn't work properly on TB3 machines.

The MBA M1/M2/M3 were all TBv3, and those are Apple's best-selling systems. If the Studio Display just goes mini-LED with no increase in resolution, then TBv3 constraints would still work. And a Studio Display that got even more expensive probably isn't helpful in keeping sales numbers up. I don't think the XDR can squat on resolution, though.
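To illustrate why resolution (not the move to mini-LED) is what drives the link budget, here's a rough uncompressed-bandwidth sketch; the resolutions are the shipping Studio Display and XDR panels, the DP payload figure is the usual HBR3 number, and blanking and compression details are deliberately ignored:

```python
# Uncompressed video bandwidth, ignoring blanking intervals and DSC.
def video_gbps(width: int, height: int, hz: int, bits_per_pixel: int = 30) -> float:
    """Raw pixel bandwidth in Gb/s (30 bpp = 10-bit RGB)."""
    return width * height * hz * bits_per_pixel / 1e9

studio_5k = video_gbps(5120, 2880, 60)  # ~26.5 Gb/s
xdr_6k = video_gbps(6016, 3384, 60)     # ~36.6 Gb/s
hbr3_payload = 25.92                    # Gb/s, DP 1.4 x4 lanes after encoding
print(f"5K60: {studio_5k:.1f} Gb/s, 6K60: {xdr_6k:.1f} Gb/s, HBR3: {hbr3_payload} Gb/s")
# Note the backlight technology never appears in the formula: mini-LED at the
# same 5K resolution costs nothing extra on the cable, while 6K60 is already
# well past the HBR3 payload and needs compression tricks over TBv3.
```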
 
Apple wasn't aiming really high-end, like Supermicro/Tyan, but more at HP and Dell workstations. Apple did a lot of fancy stuff in the 2000s that was rare outside workstation territory: dual CPUs, ECC memory, hardware RAID, Fibre Channel storage, the dabble into ZFS, and the Xserve environment. So there were certainly some high ambitions, but maybe not fully backed as a long-term strategy?
True, though even with Dell and HP you had workstations across a wide range of capabilities. I don't think Apple ever went with a Xeon above the 5000 range (3000 range for single processor and 5000 for dual). And I think those were based on i7...

I'd probably consider them low to midrange workstations (which are still freaky powerful), but I'm not in that space so please correct me if I'm wrong there.
 
True, though even with Dell and HP you had workstations across a wide range of capabilities. I don't think Apple ever went with a Xeon above the 5000 range (3000 range for single processor and 5000 for dual). And I think those were based on i7...

I'd probably consider them low to midrange workstations (which are still freaky powerful), but I'm not in that space so please correct me if I'm wrong there.
Yes, I agree, but I think Apple easily could have moved into high-end territory given what they did show off in those days.

But as I see it, the most crucial parts for a successful workstation line were lacking: long-term commitment and trustworthiness.

You cannot do long-term planning around hardware solutions with Apple; they may change completely in a year or two. At my workplace we had maybe 50 workstations in the '90s, first SGI, later Linux. They took a lot of effort to administer. Just a change to a new Linux version took two years: all home-brewed mission-critical subsystems had to be thoroughly tested or replaced. Even with a clear long-term roadmap it was a challenge. Apple's secrecy and rapid technology shifts would have made it a nightmare.
 
The main thing that kept the 2008, 2009, and 2010 models selling was that the mini and iMac were so far below them in terms of power.
After that, the iMac especially started to catch up with the lower end, meaning fewer people bought the Mac Pro. I was running a 2009 mini, then bought a 2010 quad Mac Pro second hand in 2013 and upgraded it to a hexacore and a GTX 680.
However, much of the Mac Pro's sales were the entry-level models, and that market was cut out, so the number of Mac Pros shipped dropped.
The move with the 7,1 to a much higher starting point reflected that: effectively, the old equivalent of the quad was dropped, along with the smaller shipment numbers expected.

Lots of the photographers who used the lower-entry MP moved to the 27” 5K iMacs.
I dropped from the Mac Pro to a hack with iMac 2019 specs and now run a base Mac Studio, and next time I expect to use a mini with the Pro SoC.

Much in the same way that Nvidia once sold x200, x400, x600, and a top x800, nowadays the 200 and 400 equivalents are not there; just the 600/800 remain.
 
I'm late checking in on this thread. I actively recommend people not buy a Mac Pro 2019 because of the dead-end GPU upgrades. I made a video about it when I upgraded my 8-Core to 16-Core, and I think I mentioned it's not a great buy.

I honestly debated selling off my Mac Pro 2019 for a Mac Studio M4 Max or MacBook Pro M4 Max, as the fate of the Mac Pro 2019 is an open question: will it continue to be supported past 2025? After that, the value absolutely craters. I can't decide, though, if I want to hold onto my Mac Pro 2019 for legacy Intel or not, as I still have a mixed collection of x86 VSTs in Cubase that I haven't paid to upgrade or that are no longer being upgraded.

I've also made moves to prepare for a post-Mac Pro future, as I now have a PCIe enclosure from Sonnet, Sonnet multi-SSD carriers, and a NAS. My work-provided M4 Pro is a beast. I could ditch my Mac Pro 2019 tomorrow, if I were willing to write off some old audio projects, with my M1 Max: I'd just yank the 6 SSDs and the respective PCIe cards, slot them into the Sonnet case, and call it a day. Thunderbolt 5 makes the limited appeal of the Apple Silicon Mac Pro even smaller, as you can get near x4 PCIe 4.0 NVMe speeds externally, and SSDs are one of the few things that can make use of the PCIe slots.
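As a rough sanity check on the "near x4 PCIe 4.0 speeds" point, here's a small sketch comparing Thunderbolt generations against a fast Gen4 SSD; the link rates are the headline numbers, real-world PCIe-tunnel throughput runs somewhat lower, so treat it as an approximation:

```python
# Approximate Thunderbolt link rates vs. a fast PCIe 4.0 x4 NVMe SSD.
TB_LINK_GBPS = {"TB3/TB4": 40, "TB5": 80}  # headline rates; usable data is lower
SSD_GEN4_X4_GBS = 7.9                      # top sequential read, PCIe 4.0 x4

for gen, gbps in TB_LINK_GBPS.items():
    raw_gbs = gbps / 8
    print(f"{gen}: ~{raw_gbs:.0f} GB/s raw link vs. SSD at {SSD_GEN4_X4_GBS} GB/s")
# TB3/TB4 enclosures plateau around ~3 GB/s in practice (the PCIe tunnel
# gets only part of the 40 Gb/s), well short of a Gen4 x4 drive.
# TB5's ~10 GB/s raw budget is what makes near-Gen4-x4 speeds plausible.
```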
 
Performance tests for the Mac Studio M3 Ultra have appeared on PugetBench.
Were those the binned version or the unbinned? It seems like people were getting their binned M3 Ultras much quicker. My guess is that in a few particular workflows the Mac Studio M3 Ultra shines: for budget AI workloads, $9.6k for 512 GB of potential VRAM is kind of a steal. That's 24-ish 4090s/5090s or 12-ish A100s (granted, those have much more compute).

You can run the full 671B-param DeepSeek R1 (Q4) on a Mac Studio; no clue how many tokens it'll crap out, but that's damn crazy.
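A quick back-of-envelope on why that fits; the ~4.5 bits/weight figure is my assumption for a typical Q4 quantization with per-block scale overhead:

```python
# Does a Q4 DeepSeek R1 fit in 512 GB of unified memory? Rough arithmetic.
params = 671e9            # parameter count
bits_per_weight = 4.5     # assumed: Q4 quantization plus per-block scale overhead
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.0f} GB of weights")   # ~377 GB
# That leaves headroom for KV cache and the OS inside 512 GB, which lines up
# with the ~400 GB figure people report for the Q4 checkpoint.
```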

/edit: well just saw MacRumors reported on this.
 
in a few particular workflows the Mac Studio M3 Ultra shines: for budget AI workloads, $9.6k for 512 GB of potential VRAM is kind of a steal. That's 24-ish 4090s/5090s or 12-ish A100s (granted, those have much more compute).

The M2 Ultra only had ~32 GB of effective VRAM because of memory throughput limitations. I expect the M3 Ultra will have similar issues.
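The "effective VRAM" argument is essentially a memory-bandwidth roofline: each decoded token has to stream roughly the whole (dense) model's weights from memory. A minimal sketch, with the M2 Ultra's ~800 GB/s figure plugged in:

```python
# Bandwidth-bound decode estimate: tokens/s ~= bandwidth / weight bytes per token.
def est_tokens_per_sec(bandwidth_gbs: float, model_gb: float) -> float:
    """Upper-bound decode rate for a dense model that streams all weights per token."""
    return bandwidth_gbs / model_gb

M2_ULTRA_BW = 800  # GB/s unified memory bandwidth (the M3 Ultra is ~819 GB/s)
for model_gb in (32, 100, 377):
    rate = est_tokens_per_sec(M2_ULTRA_BW, model_gb)
    print(f"{model_gb:>4} GB model: ~{rate:.1f} tok/s ceiling")
# A ~32 GB model tops out near 25 tok/s; a ~377 GB dense model would crawl
# at ~2 tok/s, which is the intuition behind the 'effective VRAM' claim.
# MoE models like R1 activate only a fraction of weights per token, which
# softens this considerably.
```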
 
My friends have all already switched from the Mac Pro to PCs. Of course, they continue to use macOS, but on PCs. When they can buy six similar computers instead of one Mac Pro, there can be only one choice.
 
The M2 Ultra only had ~32 GB of effective VRAM because of memory throughput limitations. I expect the M3 Ultra will have similar issues.
Dave2D just showed off running past the artificial cap of 384 GB of VRAM; it looked like just a terminal command when I skimmed the video. He and another guy both ran the ~400 GB Q4 R1.
 
So I just got an M3 Ultra base Studio. Below are some real-world results for 3D rendering and comparison benchmarks.

2019 Mac Pro with 2x 6800XT Duos
Self-built PC with i9-14900KF + 4070 Ti Super + A4000
Mac Studio M3 Ultra base, 28 CPU / 60 GPU

Redshift Benchmark:
Mac Pro: 2:30
Studio: 2:03
PC: 1:37

Cinebench Multi-Core CPU:
Mac Pro: 1169
Studio: 2666
PC: 2016

Rendering an actual scene in Cinema 4D and Redshift. The scene has a lot of translucent objects (think trees, grass, etc.), so not the easiest thing to render. Output was 3000 × 3000 px.

Mac Studio 19:12 RTX ON
Mac Pro 2019 41:48
Windows 11 + 4070 Ti Super 14:58
Windows 11 + 4070 Ti Super + A4000 10:21

Whilst the PC is obviously the fastest, the Studio is the nicest to work on, macOS being a big part of that for me. When rendering at full power it's totally silent, and it's sitting right in front of me. The PC, on the other hand, is water-cooled in a giant but high-end case, and it's still noisy on CPU, though quiet on GPU. The Studio is silent regardless. As for heat, the Studio doesn't generate much at all, especially compared to the heat from the 6800XTs in the Mac Pro.

I have gotten close to maxing out the 96 GB of RAM when rendering, but I think that's more of a software issue right now, as C4D doesn't always release memory when it should. Sure, 128 GB or more would be nice, but as soon as you upgrade this thing it gets expensive real fast, so my goal was to see how good the base config is.

It runs 3ds Max via Parallels just fine once you change the display driver in Max. You can totally model in it, but you wouldn't render in it; that would be the case in any VM anyway.

I have large amounts of RAW photos stored in Capture One libraries. Scrolling through a catalog and generating previews is super snappy, much quicker than on the Mac Pro, as is nearly everything, TBH.

Only downside so far has been storage. I had a 12 TB NVMe array via the Sonnet M.2 4x4 card in the Mac Pro. Read/write speeds were outrageous. As far as I can tell, there are no TB5 NVMe RAID enclosures right now. There are a handful of TB5 external drives that come with storage, but I just need the enclosure. I ended up ordering the TB5 single-NVMe enclosure from Acasis; hopefully that will arrive soon.


I had toyed with returning the Studio and upgrading to the 80-GPU chip, but I don't think it's worth it when I have a PC sitting here that would still outperform it.

So in conclusion, for my workflow, the base 2025 Studio Ultra is perfectly capable and a noticeable improvement over the Mac Pro. A little bit of me still wishes there were hope for my Mac Pro, that there were something I could do that would make it super awesome, but the reality is that the 2025 Studio outperforms it in every metric I've tested. If those MPX modules had been equipped with hardware ray tracing, it might be a different story. It's crazy that this little box, by comparison, can do what it does.
 
So I just got an M3 Ultra base Studio. Below are some real-world results for 3D rendering and comparison benchmarks. […]
Wow, great report. I’m glad to see you’re seeing pretty great gains in C4D.

Please keep us updated on any more tests, including the storage.
 
On the other hand, realtime 3D performance (rather than queued rendering) looks to be pretty laughable: 1440p at 30 fps with ray tracing is traditionally Xbox territory.

Gives some perspective on why there's no tethered VR headset from Apple.

[benchmark chart attachment]
 
On the other hand, realtime 3D performance (rather than queued rendering) looks to be pretty laughable: 1440p at 30 fps with ray tracing is traditionally Xbox territory.

Gives some perspective on why there's no tethered VR headset from Apple.

[benchmark chart attachment]

I think your above assertion may be wrong. If I recall, Linus did a test recently of the 5090 trying to do ray tracing, and basically you could not get over 20 fps with ray tracing at good quality. As soon as you turned ray tracing off (the game he was testing was Indiana Jones, if I recall), the FPS went through the roof. So for whatever reason, ray tracing is just not practical even with the best GPUs on PCs.

However, with ray tracing turned off on both the Mac and the PC, the PC will smoke the Mac.

However, for whatever weird reason, ray tracing doesn't slow down the Mac as much as it does the PC. So it's a weird scenario where ray-tracing FPS throughput is better on the Mac, albeit still at middling FPS, than on the PC, but the PC cannot even attain middling FPS with ray tracing fully on.
 
I think your above assertion may be wrong. If I recall, Linus did a test recently of the 5090 trying to do ray tracing, and basically you could not get over 20 fps with ray tracing at good quality. […]

How does your theory square with this?

[benchmark chart attachment]
 
How does your theory square with this?

[benchmark chart attachment]
Maybe I'm misunderstanding things in that image, but I'm not sure it's an apples-to-apples thing.

It certainly looks better. I'm just going off numbers I've seen the M4 Max achieve with ray tracing and what Linus reported on the Indiana Jones and Cyberpunk games. I think this is the spot in the video where I saw that:


I'm not following the game scene too closely, but I did see more than a few videos where people couldn't get good frame rates with ray tracing on, on the PC side of things. So I think there is a lot of granularity in how much of the ray-tracing settings are turned on, and how much AI interpolation is being used to compensate for the ray tracing not being able to fully render.

So getting back to your very good point/question above, I do not know what 'sacrifices' in the settings are being made for Assassin's Creed on the PC and on the Mac. And I would think the less surprising outcome should be that the big giant GPUs on the PC outperform the Mac. Perhaps what I'm noticing is some anomalous difference in ray-tracing settings from the PC to the Mac. But on at least a few occasions, I've noticed that turning ray tracing on on the Mac (with the M4 Max) does not push the frame rate down nearly as much as it seems to on the PC. Perhaps that's some function of different rendering settings, or maybe I'm missing something, but I think there may be something to it.

I would love it if some super game nerds who know all the setting variations would try to do an apples-to-apples test on these games and the relative "might" of Apple's ray-tracing throughput versus the PC side. I could easily be wrong, and maybe it's not even close. But several Mac game evaluation videos I've seen reported that flipping ray tracing on/off didn't impact frame rate much, which I found noteworthy.
 
I would love it if some super game nerds who know all the setting variations would try to do an apples-to-apples test on these games and the relative "might" of Apple's ray-tracing throughput versus the PC side. I could easily be wrong, and maybe it's not even close. But several Mac game evaluation videos I've seen reported that flipping ray tracing on/off didn't impact frame rate much, which I found noteworthy.

Well, certainly with Assassin's Creed it LOOKS like the absolute best, most "powerful", most expensive systems Apple has ever fielded... are barely keeping up with consumer-level midrange PCs when it comes to realtime 3D performance.

Which gels with something I've always suspected: that having a giant memory set available to the GPU (albeit with a bandwidth limitation that means it only really has access to a fraction of that in a high-performance scenario) isn't really an advantage if the GPU itself can't get through the work fast enough to outpace the load/unload cycle of a discrete GPU.

Frankly, I think that's the story of Apple Silicon: tenuous, error-bar-scale advantages, with very concrete disadvantages.
 
I’m a bit torn about upgrading to a new M model, assuming a new one is coming out this year. Unless the M4 is substantially better than my 2019 Mac Pro, I probably won’t consider it. I mean, it’s not like Apple Silicon Mac Pros have external GPU support either.

Aside from that, I’m also using my Mac Pro for any Windows-exclusive tasks (including some gaming), if there are any.
There are a few reviews and videos where the MacBook Pro Max outperformed a top-of-the-line Razer laptop with a mobile 4090 in just about every benchmark, including the MBP outperforming it while running Windows applications via Parallels. Cannot imagine what a Studio Ultra can do!
 
Well, certainly with Assassin's Creed it LOOKS like the absolute best, most "powerful", most expensive systems Apple has ever fielded... are barely keeping up with consumer-level midrange PCs when it comes to realtime 3D performance. […]

It may suggest that to some extent, but you seem to have ignored my point; then again, I blathered about it a lot, so probably my bad.

The point is, turning ray tracing on on a PC destroys its frame rate, like an 80% lower frame rate, whereas turning on ray tracing on a Mac seems to impact it minimally, maybe 15-20% from what I recall.

So that seems to suggest that something about Apple's design is way more robust in how it handles ray tracing. Maybe it's memory bandwidth; maybe it's just the algorithm in their hardware. Of course, it could also be because there is some cheat or software switch being used on the Mac, or the fidelity is turned down, or worse.

But it's a very interesting phenomenon: why does ray tracing seem to take such a huge hit on the PC side and a seemingly small hit on the Mac side?
 
The point is, turning ray tracing on on a PC destroys its frame rate, like an 80% lower frame rate, whereas turning on ray tracing on a Mac seems to impact it minimally, maybe 15-20% from what I recall.

Right... but with much higher quality ray tracing on the PC, at a much higher resolution, you still get double the framerate of the "most powerful", most expensive Mac Apple has ever made.

What this demonstrates isn't how badly the PC handles ray tracing, but how ridiculously well it does non-ray-traced graphics (while still being ~3x as powerful for ray tracing as the most expensive Mac).

What's the point of being more efficient at doing something if you can't actually clear the minimum bar to do the thing? That's like arguing about how fuel-efficient a car is when it can't physically climb a hill on mountainous roads at the speed limit without holding up traffic (I live this).
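To make the relative-versus-absolute distinction concrete, here's a toy calculation with hypothetical framerates loosely shaped like the numbers quoted in this thread (none of these are measured values):

```python
# Hypothetical framerates: a big percentage hit off a high base can still
# beat a small percentage hit off a low base.
pc_base_fps, pc_rt_penalty = 240, 0.75    # PC: huge RT hit from a high base
mac_base_fps, mac_rt_penalty = 38, 0.20   # Mac: gentle RT hit from a low base

pc_rt_fps = pc_base_fps * (1 - pc_rt_penalty)     # 60 fps
mac_rt_fps = mac_base_fps * (1 - mac_rt_penalty)  # ~30 fps
print(f"RT on:  PC {pc_rt_fps:.0f} fps vs Mac {mac_rt_fps:.0f} fps")
print(f"RT off: PC {pc_base_fps} fps vs Mac {mac_base_fps} fps")
# The PC eats a 75% penalty and still doubles the Mac's ray-traced framerate;
# the percentage drop alone says nothing about who clears a playable bar.
```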
 
It is known that the Mac is the worst platform for games. Many titles simply do not exist. Even Linux is already better for these purposes. If you want to play at top speed, you buy a PC, not a Mac; it is obvious. Mac graphics work best in the Metal environment, but games are optimized for Windows and there is nothing you can do about it.
 