Other than physical size, it shouldn't be much of a problem. We have several small machines with 2x and 4x GPU configurations and they work just fine. Larger machines are usually RTX 5000/6000 with single or multi-GPU configurations as well. The rest (non-workstation) is in our data center running in the cluster (A100, Hx00, Bx00) for the heavy lifting. Still wondering how you end up with the 50k equivalent?

Why stop at that, you could run 6x 4090s. Good luck maintaining and powering that thing.
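For scale on the "powering that thing" point, here's a rough back-of-the-envelope sketch. The 450W figure is the stock RTX 4090 board power; the rest-of-system draw and PSU efficiency are my own assumptions, not numbers from the thread.

```python
# Rough wall-power estimate for a hypothetical 6x RTX 4090 box.
# Assumptions (not from the thread): ~300 W for CPU, RAM, storage
# and fans, and ~92% PSU efficiency.
NUM_GPUS = 6
GPU_TDP_W = 450          # stock RTX 4090 board power
REST_OF_SYSTEM_W = 300   # CPU, RAM, storage, fans (rough guess)
PSU_EFFICIENCY = 0.92    # roughly 80 PLUS Platinum territory

dc_load_w = NUM_GPUS * GPU_TDP_W + REST_OF_SYSTEM_W
wall_draw_w = dc_load_w / PSU_EFFICIENCY

print(f"DC load:   {dc_load_w} W")       # 3000 W
print(f"Wall draw: {wall_draw_w:.0f} W")  # ~3261 W

# ~3.3 kW at the wall -- more than a standard 120V/15A circuit (1.8 kW)
# and right at the limit of a 230V/13A one (~3 kW).
```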
No experience with the Ryzen. I'm hesitant to invest in new platforms. I thought about getting a few Tenstorrent machines a while ago, but those systems are never about hardware alone. It's also software. Everyone doing research not limited to very specific things is heavily invested in the Nvidia software ecosystem. And while those Tenstorrent cards are nice, it's just painful to go somewhere else (same for Ryzen). We bought Mac Studios for AI experiments (lots of cheap memory) and it backfired. Most of those machines are now running CI pipelines and the rest are used for student experiments. Our cluster with a few hundred Pis is getting more use. While most of the staff run MacBooks as personal machines for daily work, they're complemented by workstations with Nvidia GPUs under the desk, and the rest are servers.

And then there's the AMD Ryzen AI Max+ 395, which goes up to 128GB of unified memory. How does its iGPU stack up against Apple's?
While 128GB or 512GB is nice to have, Apple needs to up its game with the next generation. The M5 needs 2TB. If they can offer that for 20k to 30k, it would allow everyone to have a desktop machine and run something like Kimi K2 locally, at the cost of additional overhead to maintain compatibility. Of course it's not a solution for everyone. As soon as compute is involved, there's no way around Nvidia.
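To put the 2TB figure in perspective, here's a rough weight-memory sketch for a ~1T-parameter model like Kimi K2. The ~1 trillion total parameter count is the published figure; the quantization choices and the 10% overhead for KV cache and runtime buffers are assumptions for illustration only.

```python
# Back-of-the-envelope memory estimate for a ~1T-parameter MoE model
# such as Kimi K2. Bytes-per-weight and overhead factor are assumed.
TOTAL_PARAMS = 1.0e12   # ~1 trillion parameters (published total)
OVERHEAD = 1.10         # +10% for KV cache, activations, buffers

for label, bytes_per_weight in [("FP16", 2.0), ("FP8", 1.0), ("4-bit", 0.5)]:
    tb = TOTAL_PARAMS * bytes_per_weight * OVERHEAD / 1e12
    print(f"{label:>5}: ~{tb:.2f} TB")

# FP16 : ~2.20 TB -- even a 2TB machine is tight without quantizing
# FP8  : ~1.10 TB -- fits in 2TB of unified memory, not in 512GB
# 4-bit: ~0.55 TB -- fits in 1TB, still far beyond 128GB
```

Under those assumptions, 512GB only gets you there with aggressive quantization, which is roughly why a 2TB-class machine is the interesting threshold.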