
binba

macrumors newbie
Original poster
Sources: WSJ
Reuters https://www.reuters.com/business/apple-begins-shipping-ai-servers-houston-factory-2025-10-23/

No, these are not available for sale to anyone, but with the baseline obsession over every notch in every bezel Apple makes, I'm flummoxed how these week-old photos seem to have gone virtually unnoticed.

These are the AI servers that have been widely reported on (and yes, heavily discussed here, including the claim that Apple may be making too many of them).

It's the first time we're seeing Apple-made servers since the Xserve was discontinued in 2011. Kind of exciting to see Apple Silicon inside a proper rackmount form factor. I wouldn't mind a couple at my shop, running Server.app and OD again ;-)
But also, people have often been very curious about "what does it look like in Apple's own data centers? What do they use internally for their hardware?" Well, in small part, here's your answer.


[Attached: four photos of the Apple-made AI servers]
 
I'm no expert, but I see eight pairs of heatsinks in your screenshots -- so that is 8 Ultras per rack? The 2023 Mac Pro heatsink has similar proportions.

The I/O looks simple, just 2x Ethernet and 2x some other connector, maybe an SFP variant.

The best view is here, it shows the fins on the heatsink -- I'll bet the individual pictured is training people in Houston -- they go in two at a time:

[Attached: screenshot showing the heatsink fins]
 
I'm no expert, but I see eight pairs of heatsinks in your screenshots -- so that is 8 Ultras per rack? The 2023 Mac Pro heatsink has similar proportions.

Does the Ultra need a pair of heatsinks? (The Mac Pro has one.)

[Attached: heatsink screw diagram from an Apple manual]


Either 8 Ultras or 16 Maxes (maybe Maxes, if the Ultras would run too hot). If they're trying to service millions of iPhones, they might want 4 x 4 Maxes (1-4 phones per 'column') as opposed to just 4 x 2 Ultras (1-2 phones per 'column').
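A quick back-of-the-envelope on that trade-off; all of the counts and per-column figures below are my guesses from the photos, not anything Apple has published:

[CODE=python]
# Back-of-the-envelope: SoC count per chassis under two guessed configurations.
# Every number here is an assumption from the thread, not a published spec.

columns_per_chassis = 4          # assumed number of 'columns' visible in the photos
heatsink_pairs_per_column = 2    # assumed from the screenshots (8 pairs total)

# Guess A: one Ultra per heatsink pair
ultras = columns_per_chassis * heatsink_pairs_per_column          # 8
# Guess B: one Max per heatsink side
maxes = columns_per_chassis * heatsink_pairs_per_column * 2       # 16

# Hypothetical serving capacity: phones handled concurrently per 'column'
phones_per_ultra_column = 2      # upper end of the 1-2 phones guess
phones_per_max_column = 4        # upper end of the 1-4 phones guess

print(f"Ultra config: {ultras} SoCs, ~{columns_per_chassis * phones_per_ultra_column} phones/chassis")
print(f"Max config:   {maxes} SoCs, ~{columns_per_chassis * phones_per_max_column} phones/chassis")
[/CODE]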


The I/O looks simple, just 2x Ethernet and 2x some other connector, maybe an SFP variant.

One pair is likely workload traffic to the broader internet. The other pair is likely node management (on a physically 'air-gapped' network). Again, there appear to be multiple nodes inside the boxes, so they would need multiple management connections (especially if they want redundancy).


The best view is here, it shows the fins on the heatsink -- I'll bet the individual pictured is training people in Houston -- they go in two at a time:

I suspect that each of those four 'columns' is a custom motherboard where the 4 Maxes are connected in Apple's much-discussed four-node cluster setup via the logic board itself (no 'evil' loose wires in Apple's 'war' on wires). (The four-node ML cluster showed up at a WWDC years ago; the refined approach showed up with the Thunderbolt 5 additions again a couple of months ago.) Every two boards hook into an Ethernet switching board to access the 'world' (or there is redundancy: each board hooks into two switching boards, so if any one switch fails, it can still talk to the outside).

[ It does appear that they have 'folded' some of the support chips needed for each M-series subnode up parallel to the heatsink. That saves some 2D footprint space and has the side effect of creating a 'channel'/'duct' that guides air down to the heatsinks farther from the fans. ]
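A minimal sketch of the redundant-switch idea above; the board and switch counts, and the wiring itself, are purely my speculation:

[CODE=python]
# Sketch: does every carrier board keep a path to the outside world if any one
# switch dies? The topology below is guessed from the discussion, not documented.

boards = ["board0", "board1", "board2", "board3"]      # assumed 4 'columns'
switches = ["switchA", "switchB"]                      # assumed 2 switching boards

# Redundant wiring guess: every board connects to both switches.
links = {b: set(switches) for b in boards}

def survives_single_switch_failure(links, switches):
    """True if each board still reaches some working switch after any one failure."""
    for failed in switches:
        remaining = set(switches) - {failed}
        if any(not (ports & remaining) for ports in links.values()):
            return False
    return True

print(survives_single_switch_failure(links, switches))   # True for the redundant guess

# Non-redundant guess: each pair of boards hangs off a single switch.
links_single = {"board0": {"switchA"}, "board1": {"switchA"},
                "board2": {"switchB"}, "board3": {"switchB"}}
print(survives_single_switch_failure(links_single, switches))  # False
[/CODE]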





If there is large, unexpected growth in LLM memory footprint, then the system would lean more heavily on the interconnect network between the SoCs in a 'column'. If that interconnect has M2-era limitations, it could be as big an issue as the age of the cores.
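To see why that interconnect would matter, a toy calculation; the spillover size and link speeds are placeholder numbers I picked, not Apple figures:

[CODE=python]
# Toy calculation: how long it takes to shuffle model state between SoCs in a
# 'column' at different interconnect speeds. All numbers are illustrative guesses.

spillover_gb = 40.0   # hypothetical amount of model/KV data that no longer fits locally

for label, gbytes_per_s in [("10 GbE-class link", 1.25),
                            ("Thunderbolt-class link", 5.0),
                            ("hypothetical faster fabric", 25.0)]:
    seconds = spillover_gb / gbytes_per_s
    print(f"{label:28s}: {seconds:6.1f} s to move {spillover_gb:.0f} GB")
[/CODE]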

Similarly, local SSD capacity could become a limit if they end up needing more 'scratch' working space for intermediate results (and as the base image of non-user data gets bigger).

P.S. Looking at all of the extensive custom logic board, storage, and network work done inside this box, the absence of a new Mac Pro is not surprising at all. Any Mac Pro design resources were probably assigned to this. And with the M2-era limitations, an M4/M5-era version would probably require another round of that work.

P.P.S. The SoC clustering is more focused 'inside the box' than across system units. The bandwidth to the outside world doesn't seem that high (there is no way these are large-scale training nodes; they are hyper-oriented toward a certain size of inference workload). That is fine, because that is what the PCC OS is geared to, but if they got the sizing wrong, that would bring issues.
 
These look very similar to the patent published a short while ago: https://patentscope.wipo.int/search/en/detail.jsf?docId=US469223044&_cid=P22-MMMP4H-82720-1

Maybe someone with time and background can see what they find 🙂

Very quickly skimmed (will look more later when I have more time).

Holy Mac Pro 2013 flashback Batman. Shades of coupled heat sources to a unified core thermal heatsink.
The overall chassis layout matches Figures 2 and 3. I had thought the SoCs were under the heatsinks, but they appear to be vertical (Figure 4). They have pragmatically coupled a vertical SoC logic board 'card' to either side of a shared heatsink. [ If not directly physically coupled, they are facing each other in very close proximity; the two thermal gradients point at one another. ] Before folks go berserk, these do not look like standard PCIe cards.

Each carrier board ('column') has eight SoCs, so 32 in a chassis (I won't be surprised if these are Max SoCs, for 3+ kW of power). A card can possibly just be yanked if a single SoC logic board component has a major failure.
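The 3+ kW guess falls out of simple arithmetic; the per-SoC wattage below is my assumption, not a measured figure:

[CODE=python]
# Rough power estimate for the guessed 32-SoC chassis.
# Per-SoC draw is an assumption; real sustained server power is unknown.

socs_per_board = 8
boards_per_chassis = 4
watts_per_soc = 100          # assumed sustained draw for a Max-class SoC under load

total_socs = socs_per_board * boards_per_chassis              # 32
soc_watts = total_socs * watts_per_soc                        # 3200 W
print(f"{total_socs} SoCs x {watts_per_soc} W = {soc_watts/1000:.1f} kW before fans/switches/storage")
[/CODE]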

In Fig. 6, the 'world' Ethernet ports are redundant access through a switch. Of course, there are now 32 'computers' coming out through the same switch. Each of the carrier boards aggregates through a switch as well.
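If 32 nodes really do share one pair of uplinks, the per-node slice of external bandwidth is easy to sketch (port speeds are assumptions):

[CODE=python]
# Oversubscription sketch: 32 guessed nodes sharing two guessed uplinks.
nodes = 32
uplinks = 2
uplink_gbps = 10          # assumed per-port speed; could be 10/25 GbE or an SFP variant

total_gbps = uplinks * uplink_gbps
per_node_gbps = total_gbps / nodes
print(f"{total_gbps} Gb/s shared by {nodes} nodes = {per_node_gbps:.3f} Gb/s each if all talk at once")
# ~0.6 Gb/s each: fine for inference request/response traffic, far too little for
# large-scale distributed training, which matches the 'inference-sized' reading earlier in the thread.
[/CODE]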

[ I didn't make my way through whether that network is the daisy-chained admin network or the cluster network. I presumed the admin network, because that is what the patent is about: administration of racks of these chassis. Skimming that aspect was confusing because there was a lot of 'what exactly is new here'. ]
 
only one power plug?

I think the objective is/was to fail over to another standby chassis if the power supply failed, and that power consumption is capped within the energy-efficient range for the chassis. Because the SoCs are not, relatively speaking, extremely expensive, the strategy seems to be to deploy a 'ton' of them in relatively more affordable rack units.

Each one of these enclosures has multiple single points of failure.
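Given that single-PSU design, a minimal sketch of what 'fail over to a standby chassis' could look like; the chassis names, statuses, and logic are all invented for illustration:

[CODE=python]
# Sketch: single-PSU chassis, so on PSU failure the workload shifts to a standby
# chassis rather than the box itself being redundant. Names and logic are invented.

active_chassis = {"rack1-ch03": "healthy", "rack1-ch04": "psu_failed"}
standby_pool = ["rack1-ch09", "rack1-ch10"]

def fail_over(active, standby):
    """Replace any chassis whose PSU failed with one from the standby pool."""
    for name, status in list(active.items()):
        if status == "psu_failed" and standby:
            replacement = standby.pop(0)
            print(f"draining {name}, promoting {replacement}")
            del active[name]
            active[replacement] = "healthy"
    return active

print(fail_over(active_chassis, standby_pool))
[/CODE]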


[Attached: photo of the chassis rear panel showing the Ethernet jacks]

There are two regular RJ-45 jacks (blue, standard Ethernet, above). Not sure if Apple is calling it 'IPMI', but it serves effectively the same role (it is the node/server management network).
 