They could, but then they lose out on the advantages that make AS feel so fast - the memory being on the chip and not having to travel through a bus.
That being said, a lot of those performance improvements come from the unified memory structure, but I would gladly give up the slight performance increase to be able to upgrade my memory when I need to.
They could just make the bus wide enough to match the M2 Max. I know it's not an amazing comparison, but quad-channel memory on IBM's POWER9 from 2017 is 120GB/s, with standard old registered ECC DDR4 DIMMs running at only 2133MT/s -- higher than the base M2, which IIRC uses LPDDR5 at 6400MT/s. Power10 from 2020, albeit with a non-standard OMI serial memory bus, got 818GB/s with modular memory (M2 Max is ~400GB/s). Ampere Altra uses standard ECC DDR4, albeit running at 3200MT/s, to get a roughly 230GB/s memory bus -- that's a number I calculated myself, not an official one (their officially stated memory bandwidth is "high"); a rough sketch of that arithmetic is below. Most amd64 processors only have two memory channels, but that's not an inherent quality. Intel's relatively contemporary E5-2690 had quad-channel; sure, it only had 80GB/s of memory bandwidth, but as IBM Power suggests, that's not a hard-and-fast rule -- especially when getting away from amd64.
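For anyone who wants to sanity-check those numbers, theoretical peak bandwidth is just bus width times transfer rate. A minimal sketch -- the bus widths here are my own assumptions for illustration, not official specs:

```c
/* Back-of-the-envelope peak bandwidth: bus width (bits) / 8 * transfer rate (MT/s).
 * The bus widths below are my assumptions for illustration, not official specs. */
#include <stdio.h>

static double peak_gbs(int bus_width_bits, int mts) {
    return bus_width_bits / 8.0 * mts / 1000.0;  /* bytes per transfer * MT/s -> GB/s */
}

int main(void) {
    printf("M2 Max, 512-bit LPDDR5-6400:    ~%.0f GB/s\n", peak_gbs(512, 6400)); /* ~410, i.e. the ~400 above */
    printf("2ch DDR5-5600 desktop, 128-bit: ~%.0f GB/s\n", peak_gbs(128, 5600)); /* ~90 */
    printf("8ch DDR4-3200 server, 512-bit:  ~%.0f GB/s\n", peak_gbs(512, 3200)); /* ~205 */
    return 0;
}
```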
That would introduce a bit of latency, but realistically not enough to matter for... anyone; AS's speed is really more about sheer bandwidth and energy efficiency, letting them pack essentially a desktop CPU into a laptop's TDP. The main area where AS is flagrantly faster than amd64 is in sustained, computationally heavy workloads, where after the initial RAM latency (which can honestly be measured in nanoseconds[1]) it really has no effect. Human time perception is measured in milliseconds[2], meaning the main source of latency is in the OS itself[3], not the hardware underneath it. People (if a niche group -- OS 9 Lives still counts 7,500 registered accounts) use Mac OS 9 for audio production for a reason; anecdotally, it feels instantaneous to me compared to Mac OS X, and that's with ~50-70 ms of latency. And keep in mind, the Mac Pro is not a consumer product -- the consumer space has the majority of the latency-sensitive applications; I doubt Pixar is going to care if their renders start 75 ns after the enter key is hit. And if you don't need integrated graphics, which I doubt any firm, studio, or other group working with 3D graphics would, the unified memory structure instantly loses the main thing it speeds up.
I can't even really come up with a good use case for instantaneous memory access; I guess either you're a stockbroker, or you're NASA and the US government gave you a few cheese cubes and some 1959 pennies, their resale value wasn't enough to get a minicomputer or a proper HPC cluster for your interstellar communications room, and you had to make do with some Mac Pros. Sure, 75 ns might be a while in CPU-cycle terms, but it's nothing at all to the end user. I think the best use case for on-package memory in a Mac Pro would be, as has been suggested, using that memory as an extended cache. 16GB of cache would be an insane amount and for some workflows would still be enough to destroy any amd64 big-iron offering. But ultimately the real reason for the non-upgradeability is so they make more money from the people willing to pay for more RAM and storage, either because they're forced into it via some software they can't replace or because they willingly go along with it.
[1] DRAM modules themselves, at least consumer ones, typically have a latency of around 10 ns, give or take 10; total CPU-to-DRAM time is around 75 ns on typical Ryzen home systems. A rough sketch of how that kind of number is measured is below.
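If you want to reproduce that kind of figure yourself, the usual trick is pointer chasing: a chain of dependent loads through a buffer much larger than the caches. A minimal sketch -- the buffer size, hop count, and use of Sattolo's algorithm are my own illustrative choices, and the result will vary by machine:

```c
/* Sketch of a pointer-chasing microbenchmark for DRAM load-to-use latency.
 * Assumes the buffer (256 MiB) dwarfs the last-level cache and that a random
 * single-cycle permutation defeats the prefetcher; sizes are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ENTRIES (1u << 25)   /* 32M pointers ~= 256 MiB on a 64-bit system */
#define HOPS    (1u << 24)

int main(void) {
    size_t *next = malloc((size_t)ENTRIES * sizeof *next);
    if (!next) return 1;

    /* Build a random single-cycle permutation (Sattolo's algorithm) so every
     * load lands on an unpredictable cache line. */
    for (size_t i = 0; i < ENTRIES; i++) next[i] = i;
    srand(1);
    for (size_t i = ENTRIES - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;            /* j < i keeps it one big cycle */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < HOPS; i++)
        idx = next[idx];                          /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (sink: %zu)\n", ns / HOPS, idx);
    free(next);
    return 0;
}
```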
[2] This is, as anything related to biology or humans is, somewhat complicated. There are various factors that go into this, and even in the same person a bunch of variables -- even smell can affect time perception, and audio and visual time perception can differ. The old standard number was ~100 ms, but that was regarding reaction time; a more likely number, admittedly based on anecdote, would be in the range of 10 ms. This would make sense, as 60 Hz is 16.667 ms, and anecdotally I notice gains in smoothness in a display up to about 85 Hz (11.8 ms), after which the gains start to die off. That's not to say the difference between 85 Hz and 120 Hz is imperceptible -- I would still prefer the latter and notice some difference -- but the reasoning shifts to simply being more visually up to date: more recent information is received every cycle, since it's highly improbable that your display and your brain are synchronized, and even if they were, they would drift. The frame-time arithmetic is sketched below.
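The frame times quoted there are just the reciprocal of the refresh rate; a trivial check, using only the rates mentioned above:

```c
/* Frame time is the reciprocal of the refresh rate: period (ms) = 1000 / Hz. */
#include <stdio.h>

int main(void) {
    const int rates_hz[] = { 60, 85, 120 };
    for (int i = 0; i < 3; i++)
        printf("%3d Hz -> %.3f ms per frame\n", rates_hz[i], 1000.0 / rates_hz[i]);
    return 0;  /* 60 Hz -> 16.667 ms, 85 Hz -> 11.765 ms, 120 Hz -> 8.333 ms */
}
```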
[3] https://danluu.com/input-lag/
As far as Apple is concerned, you don't exist as an Apple customer if you are still trying to keep an 11-year-old computer running.
At some point, you have to let go of old hardware - you truly don't realize how far behind you are.
If it still works, and does what you want as well as you want it to, replacing it is just a waste of resources and energy. I personally use various PowerPC Macs for daily purposes, though my new daily driver is a temporary Surface Laptop 3 15" with Fedora and my current project PC is a Sharp PC-MM2 running OpenBSD. I'd say computers from 2003 are just barely worth replacing (and either selling or keeping in some other role) right now, and that timescale is likely to grow as amd64 and silicon both continue to plateau, and more and more work is done on cell phones that are about as powerful as a decade-old laptop. I don't think we'll see any real change year over year like generic-you did through the 1990s until gallium nitride comes out, and even then the full potential won't come until we get off the '70s tower of duct tape onto OpenPOWER, ARM, and (somewhat begrudgingly on my end) RISC-V.
Exactly. While their OS ecosystem is convenient and mostly efficient (software quality control somewhat deteriorating as of late), it does lock you in as there is no hardware competition.
Unfortunately, the same can be said about the hardware. AS is really, really fast for mobile hardware, and uses really little power -- to the point that it's an amazing platform for Linux and OpenBSD: even without full hardware support, both get over 10 hours, often over 15, of battery life, while still benchmarking faster than macOS, and that was before there was GPU support. After the GPU driver was written, off-the-cuff battery tests showed 8 hours of 1080p 3D gameplay. But at the same time, their hardware is so locked down it presents a dilemma: either reward them for their wasteful, greedy tactics, or buy some Wintel garbage that's IME-laden and can barely last half an hour on a charge.