...and is 8GB enough for basic office use and light photo/video editing?
I don't understand what this means in practice... is it that the memory is shared with the GPU... or something else?
I am just wondering. 8GB of RAM is absolutely fine for ARM-based tablets or phones... but I have absolutely no clue whether it is enough for these devices. I guess the closest check would be (say) an ARM-based iPad Pro at 6GB... but nobody really knows how macOS (ARM) on these devices compares with iOS on an iPad.
It simply means that all processing components (CPU/GPU/Neural Engine/SSD controller) are equal-rights citizens when it comes to RAM. Often, data has to be copied between different types of RAM (or different areas of the RAM) to be used by different processors. Apple Silicon eliminates the need for these copies.
For you as a consumer, this basically means a more responsive machine, better performance in content-creation software, faster games, and some new applications that were not possible before (like the CPU and GPU working on the same data, combining their strengths). Believe me, it's a big thing.
Thanks - that makes sense. But I presume it also means that some of the 8GB is being used to service the GPU... rather than it being dedicated RAM for the graphics.
Which is no different from those computers that do not have a dedicated GPU, e.g. an Intel-based MBA or 13" MBP...
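If it helps to see what "no copies" can look like in practice, here is a minimal sketch using Metal's shared storage mode; the buffer size and the idea of dispatching a kernel against it are purely illustrative, not anything Apple showed.
[CODE]
import Metal

// A rough sketch (not Apple's implementation) of the "one pool of RAM" idea:
// a buffer created with .storageModeShared is written by the CPU and can be
// read by the GPU directly, with no copy into separate video memory.

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not available on this machine")
}

let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes straight into the allocation the GPU will use.
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count {
    values[i] = Float(i)
}

// A compute kernel or render pass could now be encoded against `buffer` as-is
// (e.g. setBuffer(buffer, offset: 0, index: 0)); there is no separate
// "upload to VRAM" step, and the CPU sees any GPU-side results through the
// same pointer once the command buffer completes.
[/CODE]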
And now the memory in the Mac mini is no longer user accessible/upgradable. My initial reaction is that this is a big negative and would prevent me from upgrading my Intel Mac mini to the M1 Mac mini. However, maybe the switch to unified memory makes this not as big of an issue.
Ehm... what? Unified memory is an alternative name for technology that has been used since, like, forever. An integrated GPU uses the system memory as video memory, so that already acts like “unified memory”. They aren’t switching to anything, just changing the name of it.
Not quite accurate, at least from how Apple described it. The old integrated-memory systems still treated the RAM allocated to the GPU as separate from the CPU. It was taken from the same pool but kept separate. From what I understand, however, the M1 has RAM that can be read by both the CPU and GPU simultaneously.
So what does this mean? On older integrated devices with shared memory, a game would first load game assets (textures, shaders, geometry, and whatnot) into CPU RAM, then copy them into GPU RAM to be rendered. On this device, if a game is native to the M1 architecture, it would load assets into RAM, period, and both the CPU and GPU could access them directly. No need to transfer data back and forth between the two. This is one of the bigger reasons the GPU performance is so much better than previous onboard GPUs.
This should make video editing much more performant, as was shown in the event with 4K video editing.
Note, the new AMD GPUs have a sort-of similar speed advantage when plugged into an AMD motherboard with an AMD CPU: the CPU gets full native access to the GPU's 16GB of RAM.
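To make the copy being described above concrete, here is a hedged sketch of both paths in Metal: the staging-plus-blit pattern used to fill a GPU-only (.storageModePrivate) buffer, where the asset briefly exists twice, versus simply using the shared buffer directly under unified memory. Sizes and names are scaled-down stand-ins, not from the thread.
[CODE]
import Metal

// Sketch only: sizes and names are illustrative.

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available on this machine")
}

let assetLength = 2 * 1024 * 1024   // stand-in for the "2GB asset", scaled down

// CPU-visible staging copy of the asset (filled from disk in a real loader).
let staging = device.makeBuffer(length: assetLength, options: .storageModeShared)!

// Discrete-GPU style: a second, GPU-only buffer plus an explicit blit into it.
let vramCopy = device.makeBuffer(length: assetLength, options: .storageModePrivate)!
let commands = queue.makeCommandBuffer()!
let blit = commands.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0,
          to: vramCopy, destinationOffset: 0,
          size: assetLength)
blit.endEncoding()
commands.commit()
commands.waitUntilCompleted()
// At this point the asset occupies memory twice: once in `staging`, once in `vramCopy`.

// Unified-memory style: skip `vramCopy` and the blit pass entirely and encode
// the render/compute work against `staging` directly -- one copy in RAM.
[/CODE]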
So in the old systems with integrated graphics, say you have 16GB of RAM and 3GB are allocated to the integrated GPU, then an asset that is 2GB gets loaded, leaving 11GB of system RAM, and then that asset gets copied to the GPU RAM pool, leaving 1GB of GPU allocated RAM, so that one 2GB asset ends up taking 4GB overall?
In a discrete GPU setting, that same 2GB asset will still reside in system RAM, so still eating up 2GB, while also being copied over to GPU VRAM?
So the improvement that we're seeing with Apple Silicon is that the 2GB asset gets loaded once into unified system RAM and then the GPU renders it directly from there, so there's none of the duplication of assets that was seen in the previous integrated-GPU case?
So it seems like with unified memory on Apple Silicon, GPU VRAM would essentially be redundant and no longer relevant?
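If it helps, the arithmetic in those questions can be spelled out; these are the thread's hypothetical numbers, not measurements.
[CODE]
// Bookkeeping only: 16GB total, 3GB carved out for an integrated GPU, one 2GB asset.

let totalRAM = 16.0        // GB
let gpuCarveOut = 3.0      // GB reserved for the integrated GPU
let asset = 2.0            // GB

// Old integrated model: one copy in system RAM plus one copy in the GPU pool.
let systemLeft = (totalRAM - gpuCarveOut) - asset    // 11 GB left for the system
let gpuPoolLeft = gpuCarveOut - asset                // 1 GB left in the GPU pool
let integratedFootprint = asset + asset              // the asset costs 4 GB overall

// Discrete GPU: one copy in system RAM plus one copy in the card's own VRAM.
let discreteFootprint = asset + asset                // 2 GB RAM + 2 GB VRAM

// Unified memory: a single copy that both the CPU and GPU address.
let unifiedFootprint = asset                         // 2 GB, once

print(systemLeft, gpuPoolLeft, integratedFootprint, discreteFootprint, unifiedFootprint)
[/CODE]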
Kind of like the old 8-bit Atari 800 with its ANTIC coprocessor. The CPU wrote display lists and data into RAM, and ANTIC used DMA, preempting the CPU when it needed to paint a display line.
This was for TV displays. There were some nifty display-list and vertical-blank interrupts to do things on the fly.
Wow! This brings back memories. I never realized how poor Apple II graphics were compared to Atari 8-bit computers until decades later, when I learned how graphics were done on the Atari computers.