
deaglecat
Original poster
...and is 8GB enough for basic office use and light photo/video editing?

I don't understand what this means in practice... is it that memory is shared with the GPU... or something else?
 
Yes, that is how integrated GPUs generally function, M1 or otherwise. There is no dedicated VRAM. The M1's memory is connected directly to a fabric controller that allocates it between the various cores.
 
That sounds about right. In an article I just read, NVIDIA used the same term for sharing memory locations between GPU and CPU. Instead of having to move data between separate memories, a memory pointer is shared.
But Apple may also be sharing the memory with I/O devices or other peripherals for direct memory access (DMA).
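The same pointer-sharing idea can be seen from the CPU side on Apple platforms today. Here is a minimal Swift/Metal sketch, assuming a Metal-capable Mac; the variable names are illustrative, not from any post here:

```swift
import Metal

// One allocation that both CPU and GPU can address -- no staging copy.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("no Metal device") }

let count = 4
// .storageModeShared asks Metal for memory mapped for both processors.
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes through an ordinary pointer; a kernel bound to `buffer`
// would read these exact bytes without any upload step.
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }
```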
 
I am just wondering. 8GB of RAM is absolutely fine for ARM-based tablets or phones... but I have absolutely no clue whether it is enough for these devices. I guess the closest comparison would be (say) an ARM-based iPad Pro at 6GB... but nobody really knows how macOS (ARM) on these devices compares with iOS on an iPad.

Edit: typo.
 
...and is 8GB enough for basic office use and light photo/video editing?

I don't understand what this means in practice... is it that memory is shared with the GPU... or something else?

It simply means that all processing components (CPU/GPU/Neural Engine/SSD controller) are equal-rights citizens when it comes to RAM. Often, data has to be copied between different types of RAM (or different areas of RAM) to be used by different processors. Apple Silicon eliminates the need for these copies.

For you as a consumer, this basically means a more responsive machine, better performance in content-creation software, faster games, and some new applications that were not possible before (like the CPU and GPU working on the same data, combining their strengths). Believe me, it's a big thing.
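To make the "CPU and GPU working on the same data" point concrete, here is a hedged Swift/Metal sketch of a full round trip through one shared buffer; the kernel name `doubleValues` and the sizes are made up for illustration:

```swift
import Metal

// A tiny compute kernel, compiled at runtime so the example is
// self-contained.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void doubleValues(device float *data [[buffer(0)]],
                         uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0f;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "doubleValues")!)

// One shared allocation: the CPU fills it, the GPU transforms it in place,
// the CPU reads the result back -- all through the same bytes.
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!
let data = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { data[i] = Float(i) }

let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(buffer, offset: 0, index: 0)
enc.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

print(data[0], data[1], data[2])  // 0.0 2.0 4.0 -- no readback copy needed
```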
 
I am just wondering. 8GB of RAM is absolutely fine for ARM-based tablets or phones... but I have absolutely no clue whether it is enough for these devices. I guess the closest comparison would be (say) an ARM-based iPad Pro at 6GB... but nobody really knows how macOS (ARM) on these devices compares with iOS on an iPad.

It won't be much different from 8GB on a regular Intel Mac these days. It is a bit on the low side, but there are two "buts":

- RAM in Apple Silicon Macs is probably faster
- the SSD is much faster, meaning that swapping will be less noticeable; combine this with hardware memory compression and suddenly 8GB is plenty

For home/office use, I think 8GB is fine. For professional creative work, I'd take 16GB. Personally, I ordered 16GB, because I regularly work with datasets that are multiple gigabytes in size and have to be loaded into RAM completely.
 
It simply means that all processing components (CPU/GPU/Neural Engine/SSD controller) are equal-rights citizens when it comes to RAM. Often, data has to be copied between different types of RAM (or different areas of RAM) to be used by different processors. Apple Silicon eliminates the need for these copies.

For you as a consumer, this basically means a more responsive machine, better performance in content-creation software, faster games, and some new applications that were not possible before (like the CPU and GPU working on the same data, combining their strengths). Believe me, it's a big thing.
Thanks - that makes sense. But I presume it also means that some of the 8GB is being used to service the GPU... rather than there being dedicated RAM for the graphics.
 
And now the memory in the Mac mini is no longer user-accessible/upgradable. My initial reaction is that this is a big negative and would prevent me from upgrading my Intel Mac mini to the M1 Mac mini. However, maybe the switch to unified memory makes this less of an issue.
 
Thanks - that makes sense. But I presume it also means that some of the 8GB is being used to service the GPU... rather than there being dedicated RAM for the graphics.
which is no different from computers that do not have a dedicated GPU, e.g. the Intel-based MBA or 13" MBP ...

at this point we can only guess about real-world RAM usage; I am sure we will know a whole lot more in a couple of weeks, when we have many first-hand reviews ...
I don't know what exactly you mean by "light photo/video editing", or what potential future plans you might have in that area, but I am positive that 8GB will be sufficient for basic office usage ...
 
And now the memory in the Mac mini is no longer user-accessible/upgradable. My initial reaction is that this is a big negative and would prevent me from upgrading my Intel Mac mini to the M1 Mac mini. However, maybe the switch to unified memory makes this less of an issue.
Ehm... what? Unified memory is an alternative name for technology that has been used since, like, forever. An integrated GPU uses system memory as its video memory, so that already acts like "unified memory". They aren't switching to anything, just changing the name of it.
 
Ehm... what? Unified memory is an alternative name for technology that has been used since, like, forever. An integrated GPU uses system memory as its video memory, so that already acts like "unified memory". They aren't switching to anything, just changing the name of it.
Not quite accurate, at least going by how Apple described it. Older integrated memory systems still treated the RAM allocated to the GPU as separate from the CPU's. It was taken from the same pool but kept separate. From what I understand, however, the M1 has RAM that can be read by both the CPU and GPU simultaneously.

So what does this mean? On older integrated devices with shared memory, a game would first load game assets (textures, shaders, geometry and whatnot) into CPU RAM, then copy them into GPU RAM to be rendered. On this device, if a game is native to the M1 architecture, it would load assets into RAM, period, and both the CPU and GPU could access them directly. No need to transfer data back and forth between the two. This is one of the bigger reasons the GPU performance is so much better than previous onboard GPUs.

This should make video editing much more performant, as was shown in the event with 4K video editing.

Note: the new AMD GPUs have a sort-of similar speed advantage when plugged into an AMD motherboard with an AMD CPU; the CPU has full native access to the GPU's 16GB of RAM.
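A sketch of that asset-flow difference in Swift/Metal (hypothetical sizes, error handling omitted) might look like this; the blit at the end is exactly the copy a unified-memory game skips:

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
// Pretend this is a decoded 2 MB game asset.
let assetBytes = [UInt8](repeating: 0, count: 2 * 1024 * 1024)

// Unified-memory path: decode once into a shared buffer and bind it to
// the GPU directly. Loaded once, visible to both processors.
let shared = device.makeBuffer(bytes: assetBytes, length: assetBytes.count,
                               options: .storageModeShared)!

// Discrete-GPU style path, for contrast: the data also has to be blitted
// into a private, VRAM-like allocation before the GPU can use it.
let vram = device.makeBuffer(length: assetBytes.count,
                             options: .storageModePrivate)!
let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let blit = cmd.makeBlitCommandEncoder()!
blit.copy(from: shared, sourceOffset: 0, to: vram, destinationOffset: 0,
          size: assetBytes.count)  // the extra copy unified memory avoids
blit.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()
```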
 
Not quite accurate, at least going by how Apple described it. Older integrated memory systems still treated the RAM allocated to the GPU as separate from the CPU's. It was taken from the same pool but kept separate. From what I understand, however, the M1 has RAM that can be read by both the CPU and GPU simultaneously.

So what does this mean? On older integrated devices with shared memory, a game would first load game assets (textures, shaders, geometry and whatnot) into CPU RAM, then copy them into GPU RAM to be rendered. On this device, if a game is native to the M1 architecture, it would load assets into RAM, period, and both the CPU and GPU could access them directly. No need to transfer data back and forth between the two. This is one of the bigger reasons the GPU performance is so much better than previous onboard GPUs.

This should make video editing much more performant, as was shown in the event with 4K video editing.

Note: the new AMD GPUs have a sort-of similar speed advantage when plugged into an AMD motherboard with an AMD CPU; the CPU has full native access to the GPU's 16GB of RAM.
So in the old systems with integrated graphics, say you have 16GB of RAM and 3GB is allocated to the integrated GPU: an asset that is 2GB gets loaded, leaving 11GB of system RAM, and then that asset gets copied to the GPU RAM pool, leaving 1GB of GPU-allocated RAM, so that one 2GB asset ends up taking 4GB overall?

In a discrete GPU setting, that same 2GB asset will still reside in system RAM, so it still eats up 2GB, while also being copied over to GPU VRAM?

So the improvement we're seeing with Apple Silicon is that the 2GB asset gets loaded once into unified system RAM and the GPU then renders it directly from there, so there's none of the asset duplication seen in the previous integrated-GPU case?

So it seems like with unified memory on Apple Silicon, GPU VRAM would essentially be redundant and no longer relevant?
 
Not quite accurate, at least going by how Apple described it. Older integrated memory systems still treated the RAM allocated to the GPU as separate from the CPU's. It was taken from the same pool but kept separate. From what I understand, however, the M1 has RAM that can be read by both the CPU and GPU simultaneously.

That might have been the case with some really old systems, but it doesn't really apply to newer Intel and AMD iGPUs, from what I understand. We don't really know what (if anything) is different about Apple's implementation, except of course the fact that with Apple Silicon, unified memory is a primary feature and everything else is tuned around it. Apple also has a lot of shared cache that allows very fast data synchronization between the CPU, GPU, Neural Engine and other processors. I expect it in general to be faster than what we have in other systems, where it's kind of built out of necessity rather than embraced as an essential foundation of a high-performance infrastructure.

Other systems where we have unified memory enabling high performance are game consoles and some custom supercomputers, like the Fujitsu A64FX, which is essentially a CPU/GPU hybrid (an ARM CPU with very wide vector processing units) that uses HBM2 for its memory.

Note: the new AMD GPUs have a sort-of similar speed advantage when plugged into an AMD motherboard with an AMD CPU; the CPU has full native access to the GPU's 16GB of RAM.

It doesn't have native access; it can just execute copies to the GPU memory a bit more efficiently.


So in the old systems with integrated graphics, say you have 16GB of RAM and 3GB is allocated to the integrated GPU: an asset that is 2GB gets loaded, leaving 11GB of system RAM, and then that asset gets copied to the GPU RAM pool, leaving 1GB of GPU-allocated RAM, so that one 2GB asset ends up taking 4GB overall?

Most integrated GPUs can just use that asset directly.

In a discrete GPU setting, that same 2GB asset will still reside in system RAM, so it still eats up 2GB, while also being copied over to GPU VRAM?

Sometimes, depending on what you want to do with it. If the GPU driver thinks that you might need to read the data back from the GPU, for example, it might decide to keep a copy in system memory.

So the improvement we're seeing with Apple Silicon is that the 2GB asset gets loaded once into unified system RAM and the GPU then renders it directly from there, so there's none of the asset duplication seen in the previous integrated-GPU case?

So it seems like with unified memory on Apple Silicon, GPU VRAM would essentially be redundant and no longer relevant?

Apple's implementation avoids the need to copy data (caveat: sometimes the data will be optimized for GPU use, which could mean, for example, applying compression or rearranging the data layout so that the GPU can access it faster). Furthermore, Apple's implementation allows very fast data exchange between different processors, since they all share the same last-level cache. And finally, Apple's implementation radically simplifies memory management for the OS, since all memory is just... memory. No matter which processors use a particular RAM page, it can be compressed, swapped out and swapped back in as necessary. This makes things simpler, faster and essentially more robust - there is less room for memory-management bugs and fewer special cases to take care of.
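Incidentally, Metal exposes this as a device property, so you can query it yourself. A small sketch (macOS only, since MTLCopyAllDevices is a Mac API):

```swift
import Metal

// Apple Silicon GPUs report true here; discrete GPUs in Intel Macs
// report false.
for device in MTLCopyAllDevices() {
    print(device.name, "hasUnifiedMemory:", device.hasUnifiedMemory)
}
```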
 
Kind of like the old 8-bit Atari 800 with its ANTIC coprocessor. The CPU wrote display lists and data into RAM, and ANTIC used DMA, preempting the CPU when it needed to paint a display line.
This was for TV displays. There were some nifty display-line and vertical-blank interrupts for doing things on the fly.
 
Kind of like the old 8-bit Atari 800 with its ANTIC coprocessor. The CPU wrote display lists and data into RAM, and ANTIC used DMA, preempting the CPU when it needed to paint a display line.
This was for TV displays. There were some nifty display-line and vertical-blank interrupts for doing things on the fly.
Wow! This brings back memories. I never realized how poor Apple II graphics were compared to the Atari 8-bit computers until decades later, when I learned how graphics were done on the Atari machines.
 