Yeah, it was a typo. When you need to use RAM efficiently, Apple silicon is way ahead. It's not simple math where unified memory "costs more memory" just because the GPU shares the same RAM. I run a Linux workstation with a 4090, and more often than not, because the GPU can't access system RAM directly, there's extra processing just to prep batches before they're copied into GPU VRAM. The constant loading and offloading becomes a huge bottleneck, and GPU performance takes a hit because it sits waiting for batches to be ready for processing.
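To make the staging concrete, here's a minimal PyTorch sketch (toy dataset and shapes are made up, not my actual pipeline): on a discrete GPU every batch gets pinned and copied over PCIe into VRAM before the GPU can touch it, while on Apple silicon the mps backend works against the same unified memory pool, so that hop isn't there.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset, just to show the staging pattern (not my real pipeline).
ds = TensorDataset(torch.randn(8192, 1024), torch.randint(0, 10, (8192,)))

# On a discrete GPU (e.g. the 4090 box), pin_memory stages batches in
# page-locked RAM and non_blocking overlaps the PCIe copy with compute,
# but every batch still has to be copied into VRAM before the GPU sees it.
loader = DataLoader(ds, batch_size=256,
                    pin_memory=torch.cuda.is_available(), num_workers=2)

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    # On Apple silicon the tensor lands in the same unified memory pool,
    # so the "transfer" isn't a trip across a PCIe bus.
    device = torch.device("mps")
else:
    device = torch.device("cpu")

for x, y in loader:
    x = x.to(device, non_blocking=True)  # PCIe copy on CUDA; no such hop on mps
    y = y.to(device, non_blocking=True)
    # ... forward/backward would go here ...
    break
```

pin_memory and non_blocking help hide the copy, but they don't remove it; if the loader can't keep the pipeline full, the GPU still stalls.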
The last thing I care about is wearing out my SSD a few years early instead of it lasting 15 years. If I want Nvidia hardware with more than 50 GB of memory, I'm looking at ~$10K for two RTX A6000 GPUs, or ~$25K for an A100. Or $8-10 an hour for an A100 or H100 cluster, to say the least. My MBP cost me about $3,700, cheaper than the other options. I hated it when Apple removed upgradability from the Mac Pro and Mac mini, but I may consider one if they can get unified memory to 512 GB on the Mac Pro. Compared to these costs, Apple RAM is a bargain; really, the whole Mac line is a bargain if you need more than 24 GB of GPU memory.
I don't have to worry about juggling system RAM, VRAM, and so on with unified memory. With SSD speeds going up, I'd happily use more swap on the occasions I need it. My cloud costs have dropped drastically, since most of my dev/testing work happens on the Mac. When I'm ready, I run in the cloud only for the minimum time needed, whereas before most of my tuning happened in the cloud.