My workstation has a 4090, and a few weeks back it had a 3090. More power means more heat: those cards throttle when running serious AI workloads and shut down a few times a week. Since Apple released the Core ML conversion tools, my M1 Max MacBook Pro has been a decent substitute; it runs cool and has never throttled or shut down on me. The AMD Threadripper 3960X CPU is a furnace under heavy load. For laptops, I get it: ARM offers great battery life. But a Mac Pro has no battery to worry about.
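For context, this is roughly the kind of conversion workflow I mean (a minimal sketch using coremltools with a PyTorch model; the model choice and input shape are just placeholders):

```python
import torch
import torchvision
import coremltools as ct

# Placeholder model; swap in whatever you actually run
model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()

# Core ML conversion needs a traced (or scripted) TorchScript model
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Convert to a Core ML program so it runs on the Apple GPU / Neural Engine
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("resnet50.mlpackage")
```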
Unified memory is a big deal. My 4090 often runs out of memory, but my M1 Max with 64 GB of unified memory works just fine, though a bit slower. I wonder whether Apple wants to compete with the NVIDIA A100 and other high-end GPUs, where the real training and model generation happens. I would love to see a Mac Pro with 256 GB or more of unified memory; that would be a game-changer.
The M1 Max is slightly slower, but it's the only option for memory-intensive stuff. I don't want to pay for a more expensive A100 in the cloud for anything other than training.
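As a rough illustration of the unified-memory point: PyTorch can also run directly on the Apple GPU via the MPS backend, so workloads that would overflow a 24 GB card can still fit in 64 GB of shared memory (a sketch with placeholder sizes, not my actual workload):

```python
import torch

# Use the Apple GPU via the Metal Performance Shaders (MPS) backend if present
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Placeholder layers and batch size, just to show tensors living on the
# Apple GPU and drawing from the same 64 GB unified memory pool
model = torch.nn.Sequential(
    torch.nn.Linear(8192, 8192),
    torch.nn.GELU(),
    torch.nn.Linear(8192, 8192),
).to(device)

x = torch.randn(4096, 8192, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape, y.device)
```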