A stock MacBook Air can record 32 channels of audio without hitting 10-20% CPU. If the cards are Avid HDX then they're perhaps using a lot of the DSP on the cards, in which case there is even less need for the CPU capabilities of the Mac Pro. Over-spec'd.
Recording 128 MADI or Dante channels at 24-bit/192 kHz is still pretty taxing, even on my new Mac Pro.
 
I don't know!

The only Mac Pro I ever saw in actual use was in a recording booth for a full orchestra that my ex played in. It had four PCIe cards with 8 analog inputs per card, and they would record all 32 channels at every performance, then mix. Is there anything else that can do that?
We can spot mic every instrument in the orchestra now. Most orchestras are around 100 instruments. Plus, we can do 5.1 and ATMOS mixdowns with mics placed throughout the hall.

The reality we have with the new Mac Pro is that I can record over 128 channels at 24/192 over MADI or Dante without breaking a sweat.

Thunderbolt is great, but when we get into 4K and 8K video and 128 channels of audio, we need the bandwidth of PCIe.
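As a rough sanity check on those numbers (my own back-of-envelope figures, not the poster's; the 4K format and Thunderbolt throughput assumptions are mine):

```python
# Back-of-envelope bandwidth estimates (all figures approximate).

def audio_gbps(channels, sample_rate_hz, bit_depth):
    """Raw PCM audio bandwidth in Gbit/s."""
    return channels * sample_rate_hz * bit_depth / 1e9

def video_gbps(width, height, fps, bits_per_pixel):
    """Raw uncompressed video bandwidth in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

# 128 channels of 24-bit / 192 kHz audio is well under 1 Gbit/s...
print(f"128ch 24/192 audio: {audio_gbps(128, 192_000, 24):.2f} Gbit/s")  # ~0.59

# ...but one uncompressed 4K 60fps stream (10-bit 4:4:4, 30 bits/pixel)
# is already ~15 Gbit/s, and Thunderbolt 3/4 only tunnels roughly
# 22-32 Gbit/s of usable PCIe data, versus ~256 Gbit/s for a single
# PCIe 4.0 x16 slot.
print(f"One 4K60 stream:    {video_gbps(3840, 2160, 60, 30):.1f} Gbit/s")  # ~14.9
```

The audio alone fits through Thunderbolt easily; it's a couple of uncompressed 4K or 8K streams on top of it that makes PCIe-class bandwidth attractive.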
 
Needs huge memory support (2-4TB+) for full LLM implementations, ideally as high bandwidth VRAM...

NPU processing is going to be even more important than CPU or GPU processing.

So the machines have to have upgradable memory, to expand into multi-terabyte sizes. But since the Mx chips' memory is directly attached to the CPU die, that likely means memory would have to be expandable alongside the CPUs. In other words, instead of adding DIMMs to expand memory, you add CPU modules to expand memory, with some kind of interconnect between CPUs to access each other's memory/caches/etc.
Given that AI crunching on a large scale is being done better by companies such as Cerebras, who basically use an entire wafer to build a massive cluster with local memory in each subprocessing unit, I don't see anything that suggests Apple is going outside the consumer-direct market with their offerings. I know they are using Apple chips for AI servers themselves, but unless they unleash some racked version with some new way of linking them as you say, I don't see it being a market they are focused on at all. You sound like you understand the needs much better than I do; I'm nobody. They could design a massive AI-core-count chip just for the serious people, one that doesn't use a whole wafer but maybe 8 units or so per wafer, with a large local memory model as well. But who is the market for this? It's an interesting time for them to try something. I'm always surprised they don't take more risks with the commercial market and see what sticks.
 
Cerebras is only for major VC-backed firms. An individual developer isn't going to use Cerebras.

A Mac Pro with 2-4TB of memory could be used by any of the thousands (millions?) of developers now getting into LLMs for their own use or app development.
 
Throughout 2022, there were rumors about Apple developing an "M2 Extreme" chip that doubled the capabilities of the M2 Ultra chip for superlative performance. The chip option was apparently cancelled because "[b]ased on Apple's current pricing structure, an M2 Extreme version of a Mac Pro would probably cost at least $10,000—without any other upgrades—making it an extraordinarily niche product that likely isn't worth the development costs, engineering resources and production bandwidth it would require." Considering that Apple was weighing up an all-new top-tier Apple silicon chip prior to the release of the current model, it isn't out of the question that the company could revisit the idea in 2026 or beyond.

M2 was produced on TSMC N5 tech.

Wafer costs for N3, N2, etc. are only higher than N5's. (N3E is more affordable than N3B, but still substantively higher than N5.) So if it was cost-prohibitive before, it is more likely to be even more cost-prohibitive now.
Higher taxes/tariffs and inflation only throw further 'gas on the fire'.

If it didn't work before because it was too expensive, then one needs to point at something that will make it radically less expensive. Otherwise it's just pointing at something that isn't coming... over and over again. In 2026, bleeding-edge wafer costs will still be going up.


Apple could also revitalize the Mac Pro by offering new optional MPX modules that integrate with the Apple silicon architecture, such as a next-generation Afterburner accelerator card.

The M2 Max/Ultra absolutely 'smoke' the Afterburner card, and the newest chips even more so. There is no new Afterburner card coming; the functionality is integrated into the main die now. There is no need for 'cards' for new products.

MPX likewise is a solution in search of a problem. Thunderbolt controllers are built into the primary SoC die now too, so there's no need to provision 'extra' PCIe lanes out to edge-located discrete Intel Thunderbolt chips anymore.

There are some folks in the Windows world trying to grow a connector that mainly just supplies all the power to the add-in card (avoiding melted power-cable problems), but MPX doesn't really add much there.


The future Mac Pro problem is far more software than hardware. Apple has put the 'old school' kernel extension model into deprecation mode.


DriverKit in, IOKit out. Apple has targeted a sizable chunk of the legacy PCIe I/O card market as being the market for the Mac Pro (have 1-3 $3K A/V cards... buy the Mac Pro). Where it somewhat looks like Apple is not paying attention is both in getting the current cards over to DriverKit and in getting new cards to come into the ecosystem.

Apple hasn't let in discrete GPUs, but if they're going to exclude those, they probably need to do some work to backfill what got pushed out. Or allow more cards to be directly I/O-mapped to client VMs that do want to do the work.

Likewise Apple needs to do more to develop the secondary PCI-e storage market instead of trying to dig a bigger moat around their SSDs. That is likely more a software problem than a hardware one.

This software stuff goes past the Mac Pro. Quirky drivers don't help a Mac mini with a TB PCIe enclosure either, if the card interface has hiccups.
 
It must be a low-volume item for it not to be refreshed for so long. Might as well wait until the end of 2025 for TSMC 2nm production to come online.
 
It could also support up to 512GB of memory, a notable increase over the current 192GB limit.

512GB and no ECC is skating out onto even thinner ice. They have already skated into the zone of not keeping up with data integrity on a single-die (and/or single-package) system. That needs solving before moving on to a multi-die one.
 
Apple has four desktop computer lines. With Thunderbolt 5, does it need the Mac Pro? Apple could sell a Thunderbolt 5 expansion enclosure that looks like the Mac Studio for adding storage and whatever else people add to Mac Pros.
Fair, but TB is a tiny pipe compared to PCIe. I would never have enough bandwidth for multiple uncompressed 4K streams and 128x128 audio streams through TB. Also, PCIe is a solid connection, while USB-C is an awful connector with no built-in cable retention.
I agree with Zipo; TB has what, 4 PCIe lanes? The Mac Pro has two x16 slots, four x8 slots, and a half-length x4 slot for the Apple I/O card, so much more bandwidth.

Plus, I'm a bit of a neat freak. I'd much rather have all my expansion inside my single computer than a bunch of expansion boxes littering my desk, or needing a big rack-mount enclosure.
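Tallying up the lanes from the slot layout above makes the gap concrete (treating everything as PCIe 4.0 at ~16 Gbit/s usable per lane is my simplifying assumption):

```python
# Lane tally based on the slot layout described above; PCIe 4.0
# per-lane throughput of ~16 Gbit/s is my simplifying assumption.
GBPS_PER_GEN4_LANE = 16

mac_pro_lanes = 2 * 16 + 4 * 8 + 4   # two x16, four x8, one x4 = 68 lanes
thunderbolt_lanes = 4                # a TB port carries roughly a x4 link

print(mac_pro_lanes * GBPS_PER_GEN4_LANE)      # 1088 Gbit/s aggregate slot bandwidth
print(thunderbolt_lanes * GBPS_PER_GEN4_LANE)  # 64 Gbit/s, an upper bound per TB port
```

Even granting Thunderbolt a full x4 of gen-4 bandwidth (generous, since in practice it tunnels less), the slots offer well over an order of magnitude more aggregate bandwidth.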
 
Something that can compete in 3D rendering with an Nvidia card would be nice, given the price tag…!
You will be waiting a very long time, I think. Bite the bullet and use PCs for that work. I did, and it was the right decision. In all honesty, I am better off with a very high-spec PC for 3D and rendering plus a base MBP, and I just remote into the PC. My Mac has just become a glorified media compiler and player for the PC's output. Apple's loss, I guess, in me no longer speccing up the hardware as I have in the past, but the reality is my PC has earned me a lot more money than the Mac.

I could have done all my work on the PC and basically none of it on the Mac with the same outcome, and I am not saying this to gloat, as I would far prefer to do it all on a Mac.

My future Macs will be off-the-shelf ones bought on discount when I can get them, and my future PCs will be as maxed out as I can afford.
 
Lots of developers are looking at machines with huge memories because of LLM (AI large language model) work. The tiniest models can fit in 16GB of memory, but the largest ones need terabytes, plus lots of bandwidth to the GPU to process them.

And that's just for inference (using models).

Single-user inference needs terabytes? Where?

Inference load for several people aggregated onto a single machine, perhaps, but an individual model? A single machine modeling a network setup (dev instance, server instance, AI instance, virtual network instance), maybe, but that isn't the inference model soaking up the space.
 
Apple doesn't want to be in the business of catering to niche professionals, which is a difficult market and precisely why they have been moving away from it for decades at this point. You don't obtain the top market cap in the world by serving professionals who need PCIe expansion.

At this point the Mac Pro for Apple is more of a halo product that they can show off every once in a while, and it's good for branding to still have some presence in that segment of users. The issue now is that Apple silicon has really made it difficult to differentiate, especially with a Mac Studio in the mix. I would be willing to bet Apple would love to just kill off the current Mac Pro and rename the Mac Studio as the Mac Pro (similar to what they did back in 2013).


The question becomes:
1. Does Apple just keep the Mac Pro the same as it is... basically keep it as the niche Mac Studio tower?
2. Do they kill it off at some point and merge it with the Mac Studio?
3. Does Apple pull the halo card and at WWDC unveil some kind of Mac Pro-only monolithic chip, or a control die driving multiple GPU/CPU dies, that grabs headlines and makes its rounds in the mainstream media?

Option 1 is most likely, I would think.
 
Single-user inference needs terabytes? Where?

Inference load for several people aggregated onto a single machine, perhaps, but an individual model? A single machine modeling a network setup (dev instance, server instance, AI instance, virtual network instance), maybe, but that isn't the inference model soaking up the space.
The full undistilled DeepSeek R1 can take up to 1.4 TB, and who knows what the future holds.
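The 1.4 TB figure lines up with simple parameter-count arithmetic (a sketch; exact footprints depend on the numeric format and on KV-cache/activation overhead, which I'm ignoring here):

```python
def model_memory_tb(params_billions, bytes_per_param):
    """Rough weight-storage estimate in terabytes (weights only,
    ignoring KV cache and activation overhead)."""
    return params_billions * 1e9 * bytes_per_param / 1e12

# DeepSeek R1: 671B parameters at FP16 (2 bytes per parameter).
print(f"{model_memory_tb(671, 2):.2f} TB")  # ~1.34 TB
```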
 
The Pro’s real problem is its positioning next to the M2 Ultra Studio.

I think they’ll retire the Studio, because if they can redesign the Mac Pro to be far smaller, then they’ll solve the problem of having a compact desktop with the benefit of internal expansion.

Remember: the Studio exists because the Mini simply didn’t have the thermal headroom for a powerful SoC. But the M4/Pro is changing that, and it’s happening in a tiny new Mac. The leap in performance even from M2 is astonishing, and it raises the question of how many Studio owners could now work from an M4 Pro Mini.

A new Mac Pro with Max and Ultra options would not only have a more affordable entry price but also a smaller footprint.
 
Apple doesn't want to be in the business of catering to niche professionals, which is a difficult market and precisely why they have been moving away from it for decades at this point. You don't obtain the top market cap in the world by serving professionals who need PCIe expansion.

Apple could drop both the Mac Pro and Mac Studio and still have a top 5 market cap. It has extremely little to do with PCI-e slots.

The major crux of the issue is that the Mac Pro (at least since the Intel era, when this general market segment got that name) has been powered by a server-focused chip that was tossed into the personal computer market as a secondary side effect. The server deployments primarily paid for the chip development; the workstation application was just a 'side show'.

Intel, AMD, Nvidia... they don't kill off high-end chips in a year. Even if they come out with something new, they'd continue to sell the older ones (at lower volumes).

The basic Mx chip is shared with the iPad. The major problem for a 'bigger than Ultra' Mac Pro SoC is that it has no 'hand me down' product to go into. The M2 went Mac, iPad Pro, iPad Air. Similar on the iPhone side: the Axx chips trickle down into iPads, Apple TVs, etc. The watch chips trickle down into HomePods, etc.

So one major problem here is: if you give the Mac Pro a chip that the Mac Studio doesn't have, where does it go?

[ There is lots of handwaving in some corners about the "Apple AI server". But do they really want to deliver a 'hand me down' product there? It's also a service that doesn't generate revenue. With no revenue, the bigger driver is looking for a 'cheaper' solution as opposed to the most expensive option. ]


At this point the Mac Pro for Apple is more of a halo product that they can show off every once in a while, and it's good for branding to still have some presence in that segment of users.

The Mac Pro is more a 'hobby' product than a halo product. 2013 -> 2019, 2019 -> 2023... that isn't 'halo'. That is 'do it occasionally when we get some spare time'.


The issue now is that Apple silicon has really made it difficult to differentiate, especially with a Mac Studio in the mix. I would be willing to bet Apple would love to just kill off the current Mac Pro and rename the Mac Studio as the Mac Pro (similar to what they did back in 2013).

More so, Apple would like to put the performance of the legacy Mac Pro into a MBP 16" and have folks just buy that.
The Studio is handy to have, but the bulk of the Mac lineup is laptops. Apple is trying to make the best laptop possible; the desktop lineup somewhat just falls out of that as a side effect.


The question becomes:
1. Does Apple just keep the Mac Pro the same as it is... basically keep it as the niche Mac Studio tower?
2. Do they kill it off at some point and merge it with the Mac Studio?
3. Does Apple pull the halo card and at WWDC unveil some kind of Mac Pro-only monolithic chip, or a control die driving multiple GPU/CPU dies, that grabs headlines and makes its rounds in the mainstream media?

Option 1 is most likely, I would think.

The Mac Pro isn't as much of a Mac Studio tower as it could be, because there is no Mn Max option for the Mac Pro.

The Mac Pro is segmented off substantially more than just "more slots". [ Apple shifted the MP 2019 entry price up 100%: $3K -> $6K. ]

The more major issue is that the I/O differentiation isn't as large as it could be. It isn't that it is a "tower" format.
Folks can stuff a Mini or Studio into an xMac enclosure.



If Apple had a "Mac on a card" option that slotted into a Mac Pro, that would be a larger value-add than the "tower" aspect. What is missing is something that could be added to give some "scale up" heft to the box: use the PCIe connection as a very fast network between the instances and have a cluster in a box.

As long as the Mac Pro is mainly aimed at the very high-priced A/V cards bought years and years ago that folks don't want to replace, it has major issues keeping anyone outside that market happy.

De facto, they already have option 3. There is silicon (and connectors) in the Ultra that the Studio can't physically use. It is for I/O, as opposed to segmenting off the CPU/GPU cores, but you can't get to it on a Studio.

What Apple did with the M2 Ultra was a bit asymmetric: x8 and x16 PCIe v4. Two x16 PCIe v4, or two x8 PCIe v5, would be a substantive improvement. Add some better software driver and/or direct I/O virtualization support, and the Mac Pro would be on much better footing with a better value proposition.
 
The Pro’s real problem is its positioning next to the M2 Ultra Studio.

I think they’ll retire the Studio, because if they can redesign the Mac Pro to be far smaller, then they’ll solve the problem of having a compact desktop with the benefit of internal expansion.

Highly unlikely to happen. The desktop footprint of the Mac Studio is substantially smaller than a Mac Pro's, not primarily just shortened in height. The Mac Pro isn't really around to fit the constraints Apple is targeting for a literal 'desktop'. The Mac Pro comes in a rack model; rack and deskside are more likely the design constraints being targeted.

If they shrank the Mac Pro footprint to the size of a Mac Studio and then put "Mac Pro"-like proportional height restrictions on it, they'd likely end up with something with "too few" slots for lots of folks. The Mac Pro has to do better than a Mini Pro in an xMac Mini enclosure. With fewer slots, and Thunderbolt 5 on the Mini/Studio, the gap is decreasing for lots more folks.

Lots of folks want to throw retirement 'hate' at the Studio so Apple will bring back the large screen iMac. That is likely doubtful.


Remember: the Studio exists because the Mini simply didn’t have the thermal headroom for a powerful SoC.

Simply not true. The Studio primarily exists because Apple dumped the large-screen iMac. It was never supposed to be a Mini or Mac Pro replacement at all. It is closer to being an iMac Pro replacement than anything else.

The 'save the planet' forces that pushed Apple toward decoupling the large screen from the iMac aren't going away. Apple simply is not going to herd customers who don't want them back into iMacs.

Similarly, the lower thermal headroom goals are more about getting more folks to buy a MBP 16" with a Max: more folks out of desktops and into the zone where whole-system portability offsets the trade-off of an integrated screen.
Apple is shifting more workload to laptops (which are most of what they sell in the PC space). As long as Apple continues to shoot for the thermal envelope of the MBP 16", the Studio's constraints won't be very hard to stay within.

But the M4/Pro is changing that, and it’s happening in a tiny new Mac. The leap in performance even from M2 is astonishing.

A new Mac Pro with Max and Ultra options would not only have a more affordable entry price but also a smaller footprint.

The M4 Max in a Mac Studio will also come with Thunderbolt v5. That will take more competitive pressure off the Studio and raise the pressure on the Mac Pro (especially if they don't fix the backhaul bandwidth to the PCIe switch for the slots).

Dragging more workload back to the MBP 16" pulls it back through the Mini, Mini Pro, and Mac Studio too. That doesn't diminish the Studio at all; those desktops are all at substantively lower price points than the Mac Pro.
 
I could be wrong, but was the title of the article perhaps supposed to be "Where Does Mac Pro Go Next After M2 Ultra?"

 
The full undistilled DeepSeek R1 can take up to 1.4 TB, and who knows what the future holds.

That isn't a single, uniform access, memory address space.

Model Variant | Parameters (B) | VRAM Requirement (GB) | Recommended GPU Configuration
DeepSeek-R1   | 671            | ~1,342                | Multi-GPU setup (e.g., NVIDIA A100 80GB ×16)



Pretty likely at some point, 'piling it higher and deeper' with just even more random data isn't going to work as well.

And terabytes with no ECC is even further out in the weeds, skipping over real issues just to chase narrow corner cases.
 
It's because there isn't a GPU available with a single 1.4TB memory address space.

You can use EPYC CPUs with a single 1.4TB ECC address space, but they're a lot slower than GPUs (mostly due to the reduced memory bandwidth compared to VRAM).

Also, LLMs are somewhat resilient to single-bit errors, so ECC isn't as important for them. A single bit error isn't going to change the output of your LLM in any meaningful way. In fact, that's effectively what quantized and distilled models are doing anyway: to reduce the memory requirement, they throw away half or more of the bits, at the cost of some result quality.
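The memory-for-quality trade is easy to see numerically (illustrative figures for a hypothetical 70B-parameter model; weights only, ignoring runtime overhead):

```python
def weights_gb(params_billions, bits_per_param):
    """Weight storage in GB at a given numeric precision."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# A hypothetical 70B-parameter model at three common precisions:
for bits in (16, 8, 4):
    print(f"70B at {bits}-bit: {weights_gb(70, bits):.0f} GB")
# 16-bit -> 140 GB, 8-bit -> 70 GB, 4-bit -> 35 GB: quantizing to
# 4-bit cuts the footprint 4x, at some cost in output quality.
```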
 