
Velli

macrumors 68000
Original poster
Feb 1, 2013
I know little about multi-CPU setups and how to spread computing over multiple systems. But, as a layman, it seems to me that AI computing is largely about how many compute units (CPUs/GPUs) you can throw at it. We know NVIDIA is the name of the game in AI. However, Apple Silicon delivers more power per watt, which matters not only for environmental reasons but also for heat dissipation, which I imagine is important in data centers.

This got me thinking: Could Apple be designing future Apple Silicon generations to function as clusters, so that a Mac Pro could hold one or more boards with a bunch of Apple Silicon chips? Kind of similar to the Afterburner card, but stacked with AS chips? It seems logical to me that Apple would go that way, and it would provide an interesting use case for the Mac Pro, but are there technical reasons this would be a Bad Idea™️?
 
Could they do this? Sure. I've mentioned in various places that, with the unified memory architecture, Apple has advantages over Nvidia in some ways. Why not lean into that and build an AI Mac? The Mac Pro is only withering on the vine; why not re-design/redeploy it as an AI-specific computer, as you mention?
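
As a rough illustration of that unified-memory point, here's a minimal sketch (assuming a recent PyTorch build with the MPS backend; the model and sizes are arbitrary, chosen only to exceed a typical discrete GPU's VRAM):

```python
# Sketch: on Apple Silicon the GPU addresses the same memory pool as the CPU,
# so model size is bounded by total system RAM rather than a separate VRAM budget.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# ~8 GB of fp32 weights -- more than the VRAM on many consumer GPUs, but it can
# sit in a 32/64 GB Mac's unified memory with no explicit CPU<->GPU copy step.
layers = [torch.nn.Linear(8192, 8192) for _ in range(32)]
model = torch.nn.Sequential(*layers).to(device)

x = torch.randn(1, 8192, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([1, 8192])
```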
 
Apple Silicon's ML cores use lower precision and don't support certain layers. Presumably the biggest reason they are so efficient is the half-precision arithmetic, which isn't desirable when training.
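
To make the half-precision trade-off concrete, here is a small sketch (assuming PyTorch with the MPS backend on an Apple Silicon Mac; the layer and sizes are arbitrary):

```python
import torch
import torch.nn as nn

# Requires an Apple Silicon Mac with PyTorch's Metal (MPS) backend available.
assert torch.backends.mps.is_available(), "needs the MPS backend"
device = torch.device("mps")

# fp16 weights and activations: cheap and fast, which is where the efficiency comes from...
model = nn.Linear(1024, 1024).to(device).half()
x = torch.randn(8, 1024, device=device, dtype=torch.float16)

with torch.no_grad():   # fine for inference
    y = model(x)
print(y.dtype)          # torch.float16

# ...but for training you would typically keep fp32 "master" weights and scale the
# loss to avoid underflow -- overhead that half-precision hardware pushes back
# onto software.
```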

To be honest, I don't think Apple really cares about that segment. They don't make Xserves anymore, OS X Server is discontinued, and new versions of macOS are hostile to automation and remote management. Plus, I'd imagine that they'd want as much of their TSMC capacity going towards iPhone chips.

Even if they did try to compete with NVIDIA on a hardware level, they have very little in the way of software. They do have Metal, which is more than can be said for AMD, but most developers don't bother with it. CUDA is used everywhere and is supported by all the major ML libraries.
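
The practical gap shows up in how libraries pick a device. A typical (illustrative) PyTorch selection looks like this, with CUDA as the first-class path and Metal/MPS as the fallback:

```python
import torch

if torch.cuda.is_available():            # the path every tutorial and library tests first
    device = torch.device("cuda")
elif torch.backends.mps.is_available():  # Apple GPU via Metal; newer and less complete
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Using {device}")  # on mps, some ops may still silently fall back to the CPU
```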

I'd love to see a modern Xserve with a ton of e-cores, though. For workloads that need many low-power cores.
 
This is a complex topic.

When we talk about the efficiency of Apple hardware, we usually talk about CPUs (and to a lesser degree GPUs). For AI work, Apple GPUs are currently much less efficient than Nvidia's (since the latter has dedicated AI processing units).

Apple currently has two technologies for AI-related processing. One is the matrix coprocessor integrated with the CPU (AMX/SVE), and another is the streaming ML accelerator (NPU). The NPU is indeed very power-efficient, but generally optimized for some common use cases like on-device image processing and other small models. It is not clear that either of these technologies can be scaled meaningfully to data center use. Btw, there is a lot of evidence that Apple is working on GPU matrix acceleration units, which might even get announced today at the event.
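
For what it's worth, the NPU isn't programmed directly; it's reached through Core ML. A minimal sketch (assuming coremltools 6 or later; "model.mlpackage" is a placeholder for any converted model):

```python
import coremltools as ct

# Ask Core ML to prefer the Neural Engine; unsupported layers silently fall back
# to the CPU, which is part of why the NPU suits small, common model types best.
mlmodel = ct.models.MLModel(
    "model.mlpackage",                       # placeholder path
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)
# prediction = mlmodel.predict({"input": input_array})  # keys depend on the model
```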

The big question is what kind of workflows you have in mind when you talk about data centers and AI, and what the usage model would be. I don’t see much sense for Apple in competing with Nvidia in the general data center space, but they might have custom AI hardware for in-house use (especially with their focus on secure AI). They are already supposed to be using M2 chips for at least some server-side processing, even though that family is not the best fit for AI work.

I do expect Apple computers to become much more attractive for local ML work and development in the coming years. They don’t need to reach Nvidia’s level of performance to become a good choice for AI scientists and engineers either. Just enough to be usable, and there are some smart optimizations they are working on that will likely allow them to punch above their weight.
 