24 vs 768 is a significant difference no matter how you cut it.

32 vs. 768, but yes, of course that's significant. It just isn't orders of magnitude, and it's also a rather extreme example. You aren't going to see many installations in the world that actually use so many lanes.
 
Well, the M4 Max, yes. But compared to the M3 Max, it came five quarters later, when previously, that delay was only one quarter.
Right, but the point is there was no Mac Studio to put it in, so they didn’t build it at that time. They only built it later to go alongside the M4 Max Mac Studio. By then it was too late to launch a Mac Pro, with the M5 Pro/Max/Ultra watershed on the horizon (rumored to introduce integrated chips and more advanced packaging).
 
This was a goal of Steve Jobs. It was one of the reasons he launched the G4 Cube when he returned: to see how his idea would fare. He never liked anyone having the ability to change out the internals of his machines. Apple Silicon favors the "closed system" design he always envisioned for the Macintosh.
Does not compute. All the evidence suggests that Jobs' position was that consumer-focussed "appliance" computers didn't need user-upgradeable parts but pro-focussed computers absolutely did. Unfortunately, after his illness and death, his wisdom got replaced by a 4-bullet Keynote slide and such subtleties were lost.

Jobs' NeXT cube had expansion cards. The G4/G5 Power Mac towers released under Jobs not only had PCI/AGP/PCIe slots but they went the extra mile in offering tool-free access to slots, RAM and drives (far better than their pre-Jobs predecessors). The original Mac Pro was insanely modular and expandable - again, largely tool-free. Head and shoulders above most tower PCs when it came to ease of adding a card, swapping a drive or adding RAM. Even the 27" iMacs had a handy little hatch for upgrading the RAM. Those designs would not have happened under a CEO who thought all computers should be sealed units.

Heck, even the Apple II had internal slots and a clip-on lid...

It's fairly well documented that the Cube failed because it was overpriced and had manufacturing faults. Lots of people swooned over it until they saw the price tag and the cracks in the case. Tower Macs that you could strip to the bones without so much as a screwdriver continued to be made. Jobs' spiritual follow up to the Cube was the Mac Mini, which given it's still going strong 20 years later can almost certainly be deemed a success.

The Trashcan appeared after Jobs' death, probably long after his influence had waned and, even then, might have worked as a "Final Cut" appliance if Apple had kept the "real" Mac Pro up-to-date alongside it - although it was a dead-end design with no CPU/GPU upgrade roadmap.

Even in the Intel era, though, Apple was struggling to differentiate the Mac Pro tower from 101 other x86 towers which offered far more flexibility when it came to tailoring the machine for specialist needs. Apple won its niche in graphics/video/audio back in the late 80s when the PC standard of the day was a kludgy 8/16-bit processor running a CP/M knockoff OS that simply couldn't do that sort of thing. Unfortunately, much as we love to hate Windows, it is (when MS isn't forcing updates and unwanted AI on users) now a powerful 64-bit OS that can, and does, "do that sort of thing". Mac's "ease of use" doesn't count for so much in the Pro arena.

Even the PC world is leaning towards small-form-factor & laptops. People I know who always used to have a PCI/PCIe tower as their daily driver are switching to laptops and mini PCs for most of their work. The use-case for powerful general-purpose modular workstations is going away, squeezed between increasingly powerful laptops & Mini PCs on the one end and on-demand cloud access to serious big iron on the other.

As for upgradability - the typical all-in-one, non-upgradeable logic board for an Apple Silicon machine that you "have to throw away" is smaller and less complex than the typical PCIe GPU or other card that you used to "have to throw away" - and I suspect that a complete, working, old-model Mac is more re-sellable than an individual, obsolete component. My #1 beef is the soldered-in SSD, not because I want to upgrade, but because flash memory is perishable in a way that RAM and other solid-state electronics aren't. Apple have fixed that in the Studio and current mini (even if you can only officially buy like-for-like replacements and not upgrades).

Also, I knew Arm- and RISC-V-based machines would need a different way of thinking about upgradeability versus x86. Even adding a basic M.2 slot on a Raspberry Pi required a daughter board. I hope the Mac Pro can evolve over time rather than the Studio replacing it.
That's nothing to do with ARM or RISC-V per se. Both architectures can support PCIe, but you're looking at system-on-a-chip implementations designed for embedded computing and set-top boxes. Server-class ARM implementations from Ampere etc. do support PCIe.


FWIW, the Raspberry Pi 5 now has a PCIe header on the board - you still need an adapter board for the obvious reason that there's no space on the Pi board for an M.2 - let alone a PCIe x4 - socket.

Apple Silicon is designed for laptops and tablets, which don't need much PCIe beyond the SSD interface and a few internal lanes for ethernet controllers etc. because that's where Apple make their money. It's also where Apple Silicon's power efficiency really pays off, where Apple's integrated GPU has an edge over other mobile GPUs - and it's far more important for Apple to have a competitive edge in their core market than a "me-too" PCIe tower that will only ever perform as well as NVIDIA's finest GPU/TPUs.
 
"Orders of magnitude" suggests a multiple of 100 or more. The Xeon 6788P has 96; the EPYC 9755 has 128. The M3 Ultra appears to have 32.

It's a wild exaggeration to call four times as much "orders of magnitude more".
Being able to plug things into those 768 lanes is also a huge advantage if you want Converged Ethernet for stuff like storage or RDMA. Using 768 lanes for GPUs allows for much larger single-instance footprints, which can be very important.
32 vs. 768, but yes, of course that's significant. It just isn't orders of magnitude, and it's also a rather extreme example. You aren't going to see many installations in the world that actually use so many lanes.
Perhaps I’m misunderstanding something, but @lusty’s premise doesn’t seem like a fair comparison.

Apple’s Max/Ultra don’t use PCIe lanes for GPUs like the Xeon and EPYC do. How many lanes are left after the GPU(s) are slotted in? Apple obviously isn’t trying to compete with those kinds of HPC installations. But most of the Max/Ultra bandwidth goes to unified memory for the SoC; what’s left over for I/O is not comparable to that of a CPU without a GPU. Am I wrong?
 
32 vs. 768, but yes, of course that's significant. It just isn't orders of magnitude, and it's also a rather extreme example. You aren't going to see many installations in the world that actually use so many lanes.
I’ve seen plenty in my career working with data and AI.
 
Perhaps I’m misunderstanding something, but @lusty’s premise doesn’t seem like a fair comparison.

Apple’s Max/Ultra don’t use PCIe lanes for GPUs like the Xeon and EPYC do. How many lanes are left after the GPU(s) are slotted in? Apple obviously isn’t trying to compete with those kinds of HPC installations. But most of the Max/Ultra bandwidth goes to unified memory for the SoC; what’s left over for I/O is not comparable to that of a CPU without a GPU. Am I wrong?
Yes, that’s what I’m saying. System-on-chip essentially means that’s your system, encapsulated, with little expandability and limited bandwidth. For consumer devices like Macs that’s perfectly adequate.
If you want to do bigger things, though, M-series chips just don’t cut it. Additional system bandwidth comes from the processor and only the processor.
 
Does not compute. All the evidence suggests that Jobs' position was that consumer-focussed "appliance" computers didn't need user-upgradeable parts but pro-focussed computers absolutely did. Unfortunately, after his illness and death, his wisdom got replaced by a 4-bullet Keynote slide and such subtleties were lost.

I'm not sure "pro-focussed computers" were ever something he was especially interested in. Sure, he had the NeXT workstations, and he did intro various Power Macs and Mac Pros. But I think to him, those were always a means to an end — and that end was ultimately to have them in appliances.

I don't, for example, think he would've liked the Macintosh II at all. That project happened in part because he wasn't there any more. That was a Gassée thing, not a Jobs thing.

Plus, he didn't care for nostalgia. He chased where the puck was going to be.

So I think in today's world of ARM-based Macs, he'd be even more aggressive about getting rid of the Mac Pro.

Jobs' NeXT cube had expansion cards. The G4/G5 Power Mac towers released under Jobs not only had PCI/AGP/PCIe slots but they went the extra mile in offering tool-free access to slots, RAM and drives (far better than their pre-Jobs predecessors). The original Mac Pro was insanely modular and expandable - again, largely tool-free. Head and shoulders above most tower PCs when it came to ease of adding a card, swapping a drive or adding RAM. Even the 27" iMacs had a handy little hatch for upgrading the RAM. Those designs would not have happened under a CEO who thought all computers should be sealed units.

But those designs happened in an era where desktops with internal expansion were way, way more commonplace. Once laptops, iPods, iPhones, iPads started taking over, that's what he was excited about.

Plus, I think some of the Jobs 2.0-era stuff were concessions to keep people like Bertrand Serlet and Avie Tevanian on board. That's why we saw projects like Xserve, Xgrid, Xsan, the continuation of WebObjects, and so on: Jobs hedged his post-merger bets. Once it became clear that this was generally a shrinking market (see how Sun and SGI were doing, and most CPU architectures too: PA-RISC, SPARC, Itanium, … all gone), that Apple specifically didn't really get much of a foot in the door, and that it very, very much did in other areas, most of those things were killed off.

It's fairly well documented that the Cube failed because it was overpriced and had manufacturing faults. Lots of people swooned over it until they saw the price tag and the cracks in the case. Tower Macs that you could strip to the bones without so much as a screwdriver continued to be made. Jobs' spiritual follow up to the Cube was the Mac Mini, which given it's still going strong 20 years later can almost certainly be deemed a success.

The Trashcan appeared after Jobs' death, probably long after his influence had waned and, even then, might have worked as a "Final Cut" appliance if Apple had kept the "real" Mac Pro up-to-date alongside it - although it was a dead-end design with no CPU/GPU upgrade roadmap.

Arguably, much as the Mac mini follows in the Cube's footsteps, the trash can was a spiritual predecessor of the Studio.

Even the PC world is leaning towards small-form-factor & laptops. People I know who always used to have a PCI/PCIe tower as their daily driver are switching to laptops and mini PCs for most of their work. The use-case for powerful general-purpose modular workstations is going away, squeezed between increasingly powerful laptops & Mini PCs on the one end and on-demand cloud access to serious big iron on the other.

Yes. Businesses generally buy laptops, or they buy today's version of "thin PCs", mini PCs that largely just run RDP or web apps. This doesn't just save space and make them (in the case of laptops) more practical for meetings; it also saves, frankly, on power.

That's nothing to do with ARM or RISC-V per se. Both architectures can support PCIe, but you're looking at system-on-a-chip implementations designed for embedded computing and set-top boxes. Server-class ARM implementations from Ampere etc. do support PCIe.

Sure, but the device tree isn't as standardized.

Qualcomm Windows laptops seem to use a shim implementation of UEFI to make Windows boot. That implementation isn't good enough to get Linux to fully work. Raspberry (Broadcom) instead uses its own notion of a device tree. Apple, finally, has iBoot, which is a bit of a return to the OpenFirmware approach.

 
Yes that’s what I’m saying. System on chip essentially means that’s your system, encapsulated with little expandability and limited bandwidth. For consumer devices like Macs that’s perfectly adequate.
If you want to do bigger things though, m series chips just don’t cut it. Additional system bandwidth comes from the processor and only the processor.
Maybe Apple are working on a blow-them-away solution that solves their positioning at the Pro end of the hardware space, they’ve managed to keep it really quiet, and that is what we might see in the next Mac Pro re-do.
 
A general btw point - this is a great topic full of excellent, insightful contributions; makes for an informative read. I particularly enjoy those who know their history and manage to contextualise hardware developments in a very erudite timeline. Splendid stuff, and thank you.
 
Apple said this about TB5 for the Ultra Mac Studio. Pretty sure this means the end of the Mac Pro.

Apple being willing to lie about TBv5 is indeed a bad sign. It indicates that the non-technical inmates are running the asylum and that doling out Cupertino kool-aid is the groupthink objective.


Lower right of the blurb.
The 120Gb/s mode of TBv5 generally makes PCI-e slower, not faster. TBv5 is symmetrically 80Gb/s both ways. There is a 'rob Peter to pay Paul' asymmetric mode where you steal 40Gb/s from the inbound (to the host system) direction and assign it to outbound traffic: 120/40. 40Gb/s inbound is just as limited as TBv3/4. There is no improvement there. Hence there is zero rational justification for declaring the PCI slot over... when it was not over with TBv3 (or v2).


TBv5 has x4 PCI-e v4 throughput. That is 64Gb/s (or 8GB/s), which is less than the throughput of a 100GbE link.
The asymmetric 'hocus pocus' is misdirection away from the fact that a substantive limitation still remains.
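A quick back-of-envelope sketch (spec-sheet rates only, ignoring protocol overhead; not a measurement of any real machine) to put those figures side by side:

```python
# Per-generation PCIe raw rate (GT/s) and line-encoding efficiency.
PCIE_GEN = {
    1: (2.5, 8 / 10),     # Gen 1.x uses 8b/10b encoding
    4: (16.0, 128 / 130), # Gen 4 uses 128b/130b encoding
}

def pcie_gbps(gen, lanes):
    """Usable Gb/s for a PCIe link, before protocol overhead."""
    rate, eff = PCIE_GEN[gen]
    return rate * eff * lanes

x4_gen4 = pcie_gbps(4, 4)  # the PCIe tunnel TBv5 carries
print(f"x4 PCIe Gen4 : {x4_gen4:5.1f} Gb/s (~{x4_gen4 / 8:.0f} GB/s)")
print("100GbE       : 100.0 Gb/s")
print("TBv5 sym     :  80.0 Gb/s each way")
print("TBv5 asym    : 120.0 Gb/s out, only 40.0 Gb/s in")
```

With those spec numbers the x4 Gen4 tunnel lands at roughly 63 Gb/s, which is indeed below a single 100GbE link in either direction.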

The asymmetric mode is primarily a solution for unidirectional video data streams, not PCI-e. Data being sent to a monitor screen is dramatically skewed in one direction. It is more reassurance that things will be OK now that Apple put the only GPU options inside the SoC module. For the other major usages of PCI-e slots it does almost nothing. Inbound video capture? Nope. Inbound network traffic? Nope. Normal disk storage traffic, which is generally symmetric over time? Nope.

The 1GbE standard was formed in the last century. 10GbE was the 2002-2005 timeframe. Apple thinks these are 'insanely great' standard desktop network solutions in 2026 (20+ year old network tech) for $7-10+K workstations. x4 PCI-e v5 SSDs on a v4 bus... where is the problem? Multiple v5 SSDs aggregated on a v4 bus? Where is the problem? No problem... just hand-wave at non-existent PCI-e bandwidth.

That mindset... yeah, a new Mac Pro is likely going to have problems getting 'green lit'. That it is over a year later and they haven't cleaned up their marketing is more telling. I guess someone is waiting on someone to file a formal FTC complaint to fix it. Thunderbolt is backwards-looking in terms of the technology it covers (it is always a step behind where leading-edge PCI-e and video are). The Mac Pro has had a forward-looking technology element to it. If Apple only wants to look backwards, then it is a product mindset mismatch.
 
The thing is, the Mac Pro was really there just for expansion, and cards are slowly going away. In the modern era there isn't more power to be had, and if there is, it's with a multi-node setup.
 
Replying to those questioning the utility of PCIe slots in the Mac Pro.
Hollywood post-production audio relies heavily on Mac with Avid add-in cards.

I use Mac Studios with third-party expansion via Thunderbolt to save money, but I can sympathize with people concerned about adding several more points of failure when accessorizing a $300,000 audio console. A big studio would definitely prefer a Mac Pro.

Yes it’s a niche market, but it’s one where the users have a strong affinity for the Apple brand.
 
32 vs. 768, but yes, of course that's significant. It just isn't orders of magnitude, and it's also a rather extreme example. You aren't going to see many installations in the world that actually use so many lanes.

You also won't get that many usable lanes, because the chips need to use some of them to communicate with each other - and you won't fit that many lanes on a board anyway.

Never mind that the original point I made was about memory bandwidth, specifically in the context of 2TB being something you don't configure in these systems for RAM because they don't have sufficient bandwidth to make it worth it.

But yes, IO to system devices isn't orders of magnitude either. And it's hilariously more expensive; like I said, you could scale out with less power and get more performance for anyone who isn't literally at the scale of Google, OpenAI, etc.
 
Maybe Apple are working on a blow them away solution that solves their positioning in the Pro end of the hardware space, and they've managed to keep it really quiet and that is what we might see in the next Mac Pro re-do
Fingers crossed. They have the capability; the question is whether the juice is worth the squeeze.
 
Never mind that the original point I made was about memory bandwidth
Memory bandwidth is irrelevant in a system that can’t access data to fill that memory. It’s a useful statistic in a consumer endpoint that needs a snappy interface, but for serious work memory bandwidth isn’t relevant if there isn’t any bandwidth to mass storage outside the box. You can process a tiny amount of data really fast and then wait, or you can process a massive amount of data slightly slower but get more done overall.
Memory is just level-three cache. Apple have chosen to put that cache on the package, which is great. They chose not to put any external bandwidth in to feed it. Those are excellent engineering choices for consumer devices, so Apple are bang on for their customers.
In bigger systems memory becomes less relevant, although a bigger buffer is useful to soak up output before transferring to disk. Ability to pull data from storage/network and push it back is more useful here, hence wanting PCIe lanes. Similar when you need to connect a lot of devices like the audio folk.
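To illustrate that trade-off with deliberately rough numbers (the machine size and link speeds below are illustrative assumptions, not measurements of any Apple product): how long it takes just to stream a full load of data into memory over a given external link.

```python
def refill_seconds(ram_gb, link_gbps):
    """Seconds to stream ram_gb gigabytes in over a link of link_gbps gigabits/s."""
    return ram_gb * 8 / link_gbps

# Hypothetical 512 GB unified-memory machine fed over built-in 10GbE,
# versus the same data pulled through a 100GbE card in a real PCIe slot.
print(f"10GbE : {refill_seconds(512, 10):6.1f} s")
print(f"100GbE: {refill_seconds(512, 100):6.1f} s")
```

Roughly 410 seconds versus 41: however fast the memory itself is, a starved external link dominates the wall-clock time for data-heavy work.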
 
I'd much rather have slower expandable RAM, slower expandable storage and the possibility of PCIe cards being able to be used if I chose so.

Perfect would be if Apple designed a Mac Pro with a swappable Apple Silicon Core. Make a custom high speed bus (or use something already around) that connects to a bridge chip that has PCIe channels.

Maybe have 4 slots that can be CPU/GPU cores using RDMA; that also connects to the PCIe bridge chip.

Back to reality, Apple would never do this because they like changing interfaces and protocols too often.
 

It's too bad Apple didn't go ahead with the rumoured 2022 Intel Mac Pro refresh with Ice Lake Xeon processors. Up to 38 cores, up to 4TB RAM,

Ice Lake Xeon looked good 'on paper'. As delivered it was not all that great. Even Dell and HP avoided that stuff. Ice Lake W-series versus the AMD Threadripper of that era was no contest. Ice Lake was hot (ran substantially hotter), late, and overpriced.




There was no credible rumor for a post-2020 Intel system. After Apple announced in June 2020 that they were switching, the "Ice Lake" stuff was just some folk trying to fool themselves. AMD and Intel were in a deep battle for the regular server versions of those respective dies. There was no huge market opportunity that Apple was missing out on.

In some alternative universe where Intel delivered a relatively bug-free Ice Lake W series in 2019 (where some hand-waving, 2017-era early roadmaps probably placed it), Apple might have used it. Once it slid into "maybe 2020" and then later, it was doomed as long as Apple Silicon got delivered on time (2020).


shipping with AMD RDNA2 GPUs with RNDA3 options added as they became available in 2023,

The rest of the Mac line-up had dropped AMD GPUs in bulk by 2021 (the MBP 14/16 and basic iMac had transitioned).
An even bigger unit-number cliff came with the Mac Studio intro in March 2022. AMD doing drivers for a single system is not credible. There is exceedingly low return on investment there, and at the time AMD's opportunities in other spaces were *WAY* higher.


it would have given the Mac Pro a useful niche for high RAM capacity, expandability, and x64 compatibility to make it distinct from the Apple Silicon Mac Studo.

By 2022-23 the dead end for the Intel macOS variant would have been posted on the wall. Three years of zero 3rd-party GPU updates was a clear sign, for anyone not trying to fool themselves, that there was no long-term option there. Who is spending $10+K on a dead end? (Not many with sense.)

In such a timeline, the 2023 M2 Ultra Mac Pro would not exist and a 2022 Intel Ice Lake Mac Pro would be sold alongside

Apple would willingly sell fewer M2 Ultras... because the objective is to make less money on the chip? [i.e., make the M2 Ultra production run shorter, so they have to amortize higher costs over a smaller number of chips.]


the 2022 M1 series Mac Studio and later 2023 M2 series Mac Studio. The Mac Pro could then be completely discontinued in 2025 allowing the Mac Pro line to end cleanly on a strong last Intel iteration just like the Power Mac line ended on a last PPC iteration rather than carrying on as an Apple Silicon afterthought as it does now.

An Intel Mac Pro sold in 2025, with the operating system dying one year later in 2026. Those customers are going to be happy? Probably not. The M2 Ultra Mac Pro was probably originally planned to come 12/2022 (and Apple would have hit their 2-year schedule). 2023-2026 is a far gentler cliff to fall off of.


Of course if Apple had introduced a new Intel Mac Pro they would need to maintain Intel support in MacOS for several more years which they don't want to do.

Which is why those post-2020 'rumors' of a new Intel product were illusory smoke.
 
When it comes to the clustering and RDMA stack Apple’s introducing, the competition isn't 16 lanes of PCIe 5, it's NDR Infiniband (which it beats)

Not really. There are 100GbE RDMA-supported cards for the Mac Pro now (as long as you're not running macOS :) **). In gross bandwidth, TBv5 can't even track 100GbE. NDR Infiniband is 400Gb/s. TBv5 isn't covering that. (HDR is 200Gb/s.)

If 'beats' is supposed to mean latency, then you need to compare only point-to-point links. An InfiniBand switch is going to add at least an order of magnitude more latency, but it also enables scaling (which TBv5 does not). If the Mac Pro had HDR/NDR drivers, then four cards in each of four Mac Pros would likely deliver lower latency in the same point-to-point cluster-of-four configuration that the TBv5 network is in with the Studios. If this is a 'beats' because there aren't any current InfiniBand drivers on the Mac Pro... that really is a different problem.

The only clear 'beats' is on cost, not performance. Four Thunderbolt ports are included with the Mac Studio (or Mini Pro). Add-in cards would clearly cost more, and four relatively expensive ones would cost 'tons' more.


and XDR Infiniband (which is about 40% faster but also isnt heavily deployed anywhere yet and also costs a fortune)

Don't even need XDR. HDR InfiniBand is problematical on a Mac Pro because Apple makes that market 'harder' to enter. HDR is still talking faster bandwidth and lower latencies.


** ATTO Fast Frame gen 4. Tech Spec.
https://www.atto.com/products/high-performance-ethernet-nics/

" ... RDMA over Converged Ethernet (RoCE) enables industry-leading low latency and decreases CPU utilization on Linux and Windows. ..."

The same card in macOS is hobbled. The hardware is not the root-cause issue.
 
Replying to those questioning the utility of PCIe slots in the Mac Pro.
Hollywood post-production audio relies heavily on Mac with Avid add-in cards.

I use Mac Studios with third-party expansion via Thunderbolt to save money, but I can sympathize with people concerned about adding several more points of failure when accessorizing a $300,000 audio console. A big studio would definitely prefer a Mac Pro.

Yes it’s a niche market, but it’s one where the users have a strong affinity for the Apple brand.

If I remember correctly at the Mac Pro 2023 introduction Apple flashed a picture of a Mac Pro stuffed with six HDX cards.

AVID's test supported configuration maxes out at three.

Apple's extremely narrow corner case may work, but how many $100K consoles are going to get built? Apple having several of those folks 'parked' on the M2 Ultra version for 6-8 years also means they don't need to do anything for a long stretch (because that niche isn't moving).

The problem with these audio cards is that although they have a physical x4 PCI-e interface, lots of them are electrically x1 PCI-e v1.1 in bandwidth. Latency is likely a more sensitive issue, but that probably won't push a new system development process forward any faster.

[Finally found possible low-level specs on HDX that suggest it is x4 PCI-e v1.1. It is not quite an x1 link, but it is not a modern bandwidth consumer. PCI-e v4 is about eight times PCI-e v1.1. If the base lanes on the next Mac Pro were upgraded to v5, the gap would be even larger overkill.]
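Taking that bracketed note at face value (the x4 PCI-e v1.1 figure for HDX is that note's assumption, not something from AVID documentation), the headroom works out roughly like this:

```python
# Per-lane usable rates from the PCIe specs (Gb/s).
gen1_lane = 2.5 * 8 / 10      # Gen 1.1: 2.5 GT/s with 8b/10b encoding -> 2.0 Gb/s
gen4_lane = 16.0 * 128 / 130  # Gen 4: 16 GT/s with 128b/130b encoding -> ~15.75 Gb/s

hdx_gbps = 4 * gen1_lane      # assumed x4 Gen1.1 card: ~8 Gb/s (~1 GB/s)
slot_gbps = 4 * gen4_lane     # the same physical x4 slot at Gen4: ~63 Gb/s

print(f"card ~{hdx_gbps:.0f} Gb/s, slot ~{slot_gbps:.0f} Gb/s, "
      f"~{slot_gbps / hdx_gbps:.0f}x headroom")
```

So even a v4 slot would leave such a card using roughly an eighth of the available bandwidth; at v5 the ratio doubles again.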
 
Why not just invent a special cable for the Studio that, when linked with other Studios, becomes one faster computer that uses all the processors and RAM to run one OS, or a few virtual ones too. They could do so much more than other companies since they control the hardware and software. And it won't compete with a Mac Pro, because nobody is going to buy that anyway at numbers bigger than racks of Studios.
 
Why not just invent a special cable for the Studio that, when linked with other Studios, becomes one faster computer that uses all the processors and RAM to run one OS, or a few virtual ones too. They could do so much more than other companies since they control the hardware and software. And it won't compete with a Mac Pro, because nobody is going to buy that anyway at numbers bigger than racks of Studios.
Why wouldn't they just include an InfiniBand port and an on-die controller at that point?

Not really. There are 100GbE RDMA-supported cards for the Mac Pro now (as long as you're not running macOS :) **). In gross bandwidth, TBv5 can't even track 100GbE. NDR Infiniband is 400Gb/s. TBv5 isn't covering that. (HDR is 200Gb/s.)
FWIW I don't entirely disagree with you, including your whole post, which I cut for brevity here, but for the use cases Apple’s targeting this does, I think, beat IB. For one thing, yes, as you mentioned, you don't need to actually set up IB, which is both a pain and costs quite a bit. And as for bandwidth, I believe, though I don't have the gear to test the assumption, that Apple’s dynamically reallocating the bandwidth on the TB5 link as needed. Is 120 asymmetric and conditional better than symmetric 100?
On paper, and under a lot of typical applications using IB, no, not a chance. But for a lot of the workloads likely to be used on a Mac with this? Probably not far off or, potentially, even better.
 
The main reason I haven’t updated my 2019 Mac Pro to a Mac Studio is that I require absolute silence in the office. As far as I’ve heard, the Mac Studio fan is quite audible, whereas my Pro makes literally zero noise once you’re more than a foot away from it.

Is that still the case with the most recent Studio? Or is it silent at high loads?
I had a loaded 2019 Mac Pro with 2x 6800X Duos. It was quiet for sure, but not always silent. I sold it and got an M3 Ultra Studio. It's right on my desk and I rarely hear it. If I do, it's a very faint but brief fan noise when all cores are used for 3D rendering. The Mac Pro could heat our office, as those Duos got very hot. The Studio is just warm to the touch. Don't regret it one bit.
 
Why not just invent a special cable for the studio that when linked with other studios becomes one faster computer that uses all the processors and ram to run one os or a few virtual ones too. They could do so much more than other companies since they control the hardware and software. And it wont compete with a Mac Pro because nobody is going to buy that anyway at numbers bigger than racks of studios.

You just invented InfiniBand, basically; Apple are currently doing this with RDMA over Thunderbolt, but that’s only as fast as low-end InfiniBand.

Either Thunderbolt needs to catch up to 400/800 gigabit, or, more appropriately, Apple don’t bother trying to achieve that on all their devices via a new Thunderbolt standard and just adopt InfiniBand directly on the Studio (or a future Mac Pro).

Or, more likely, Apple do nothing and own the prosumer space; they’ve never really played in the datacenter, and Nvidia and AMD (not Intel with Xeon any more, lol) kinda own that.
 