Something I'm puzzled about.
It was always the cheap, low-end machines that had graphics systems sharing main memory with the CPU.
Of course, you then had the problem that memory you wanted for your programs was taken up by the graphics chip.
We then moved on to higher-end graphics cards with their own super-fast dedicated memory, so they no longer had to contend with the processor.
The processor could get on with what it was good at in its own memory, and the graphics card could do the same in its dedicated memory.
This allowed graphics performance to storm ahead.
Now we seem to be going back to shared memory again.
Can anyone explain why this is not a step backwards?
There simply is not enough information available yet to give a clear answer to your question, but I'll try to cover a bit.

First of all, that's not the timeline. If you go back far enough (while still staying in the era of 3D), integrated graphics acceleration didn't exist, only separate GPUs. Then integrated graphics started to become a thing at the low and ultralight ends of the spectrum. That latter part is extremely relevant to the M1.
The MacBook Air, for example, has never had a GPU with independent graphics memory. The last 13" MacBook Pro to have a dedicated GPU was, I believe, the 2010 model, a decade ago. The last Mac Mini with a dedicated GPU was released in 2011.
The M1, at this point, is present only in those three product lines, so the integrated M1 GPU has replaced only the integrated Intel GPU. Whether integrated graphics might have some advantages, or whether dedicated GPUs are better in every possible situation, is a moot point at this stage, because Apple hasn't shipped anything with a dramatically different GPU/CPU architecture than what it has had for nearly a decade.
You can't go backwards if it's the same as what you had before; the only question right now is whether the M1 can outpace the Intel integrated GPU in the products where it is replacing an Intel CPU. (I'm leaving out eGPU support here, which is a separate issue.)
Now, if you want to get into hypotheticals: the M2, or M1X, or P1, or whatever ends up in the 16" MacBook Pro, the high-end big-screen iMac, or eventual pro products may or may not come with a dedicated third-party GPU. If they have dedicated GPUs, it's again a moot point. If they don't, then we can debate whether this is a huge step back, whether it's a modest step back that only affects a tiny sliver of pro users, or whether there are genuine advantages in at least some use cases.
One thing to note on the hypothetical advantages or disadvantages of the M1 architecture: GPUs don't just generate 3D graphics anymore. They are often used as general-purpose coprocessors for operations they're very good at. For this sort of general-purpose computing on a dedicated GPU, you need to copy data from CPU memory into the GPU's memory before the GPU can work on it, which adds overhead.
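To make that overhead concrete, here's a minimal Metal sketch of the discrete-GPU path: the CPU fills a staging buffer, then an explicit blit pass copies it into GPU-private memory before any compute work can run. The buffer names and sizes are just for illustration, not anything from an actual Apple pipeline.

import Metal

// Hedged sketch of the discrete-GPU path: data must be staged and
// explicitly copied into GPU-private memory before the GPU can use it.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else { fatalError("no Metal device") }

let input = [Float](repeating: 1.0, count: 1_000_000)   // illustrative payload
let byteCount = input.count * MemoryLayout<Float>.stride

// Staging buffer the CPU can write directly.
let staging = device.makeBuffer(bytes: input, length: byteCount,
                                options: .storageModeShared)!
// Destination buffer in GPU-private memory; the CPU cannot touch it.
let gpuOnly = device.makeBuffer(length: byteCount,
                                options: .storageModePrivate)!

// This explicit copy is the transfer overhead described above.
let cmd = queue.makeCommandBuffer()!
let blit = cmd.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0,
          to: gpuOnly, destinationOffset: 0, size: byteCount)
blit.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()   // stall until the transfer finishes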
In a fully integrated system like the M1, if the software is designed to take advantage of it, that copy overhead disappears, because the CPU and GPU use the same RAM. So, hypothetically speaking, there could be performance gains in some areas of GPU computing thanks to the shared RAM. How much will also depend in part on just how fast the RAM integrated into the M1 package is.
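For contrast, here's the unified-memory version of the same idea, assuming the app allocates shared storage: one buffer is visible to both CPU and GPU, so there's no staging buffer and no blit pass at all. Again, the names are illustrative.

import Metal

// Hedged sketch of the unified-memory path: a single buffer is backed by
// the same physical RAM for both CPU and GPU, so no copy is needed.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("no Metal device") }

let count = 1_000_000
let byteCount = count * MemoryLayout<Float>.stride

// One buffer, shared between CPU and GPU.
let shared = device.makeBuffer(length: byteCount, options: .storageModeShared)!

// The CPU writes into it directly...
let ptr = shared.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }

// ...and a compute encoder can bind the very same buffer with no transfer:
//   encoder.setBuffer(shared, offset: 0, index: 0)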
I don't know whether Intel integrated graphics currently take advantage of that capability on macOS. I actually don't even know for sure whether the M1 does, but I've read that it's the case.