They also have other useful features like surge-protected USB ports, per-port MMUs, memory-controller QoS, and the M5 GPU just doubled integer multiplication throughput, but you won't find any of these things on the marketing sheet. Not a strong argument IMO.
Perhaps Apple has implemented full end-to-end ECC RAM without telling anybody. That would not be an argument against it. On the contrary, it would show that Apple also believes it's a good idea to have.
 
When they rolled out the new Mac Pro using Apple Silicon, I was shocked that they used the same enclosure. On one level I understand the logic of reusing an existing enclosure, but it really highlighted the deficiencies of this $6,000 - $10,000+ computer. With Apple Silicon and nearly everything soldered onto the logic board, the size, cooling, and expansion bays were useless in the Mac Pro.

I love the design of the case, and I was looking for a knock-off to build my own PC for years, so it's not like I have anything against the design; it's fantastic.
In 5-10 years you can get them dirt cheap from OWC.
 
Just a few comments on this:

- What makes you think that Apple wants or needs to train models on their own hardware (inference is a different matter)?
- M3 Ultra is slow for ML because it still lacks ML acceleration. M5 Ultra might be less slow.
- The latest macOS beta introduces InfiniBand support, which is the protocol Nvidia uses to build large AI clusters. This would allow you to connect multiple Studios together and use them as a distributed ML accelerator at a lower price than an equivalent hypothetical Mac Pro. And it is very possible that this is what Apple will use internally to link multiple Max- or Ultra-class chips into coherent compute clusters.
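To make the cluster idea above concrete: the core operation a fast interconnect like InfiniBand carries in data-parallel training is an all-reduce, where each node computes gradients on its own data shard and the nodes then average those gradients so every node applies the identical update. The following is a minimal conceptual sketch of that averaging step in plain Python; the node count, gradient values, and function name are illustrative assumptions, not anything Apple or Nvidia ships.

```python
# Conceptual sketch of the all-reduce (average) step in data-parallel
# training across several hypothetical Mac Studio nodes. In a real
# cluster the interconnect (e.g. InfiniBand) would carry this exchange;
# here we just simulate it in one process. All names/values are
# illustrative assumptions.

def all_reduce_mean(per_node_grads):
    """Element-wise average of each node's gradient vector.

    After this step, every node holds the same averaged gradients
    and applies the same parameter update, keeping replicas in sync.
    """
    n_nodes = len(per_node_grads)
    length = len(per_node_grads[0])
    return [sum(g[i] for g in per_node_grads) / n_nodes
            for i in range(length)]

# Four hypothetical nodes, each with gradients for the same 3 parameters,
# computed from that node's shard of the training data:
node_grads = [
    [0.4, -1.2, 0.0],
    [0.0, -0.8, 0.4],
    [0.8, -1.0, 0.2],
    [0.4, -1.0, 0.2],
]

avg = all_reduce_mean(node_grads)
print(avg)  # the identical update every node applies
```

Real frameworks (e.g. MPI-style or ring all-reduce implementations) overlap this communication with computation, but the arithmetic being exchanged is the same element-wise average shown here.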
Your posts make me smarter, Leman: they're objective, full of expertise and insight, and clear about where expertise ends. The opposite of most MR posts, which make me dumber.
 
The comparisons you mentioned are totally biased.
ChatGPT is trained using Nvidia chips. They didn't need to design their own chip, and neither does Apple.

Whether Apple chooses to or not is their own decision, it's build versus buy, but it is not a necessity for them to make their own server chip.
 
ChatGPT is trained using Nvidia chips. They didn't need to design their own chip, and neither does Apple.

Whether Apple chooses to or not is their own decision, it's build versus buy, but it is not a necessity for them to make their own server chip.
We are talking about Apple, not OpenAI. Apple is using their own closed ecosystem with their own hardware and software. That's a huge difference. Yeah, they could use other services, but eventually they would develop and use their own. Besides, tell that to Apple Intelligence.
 