
BenRacicot

macrumors member
Original poster
I’ve been researching what the MX/2 GPU cores could be comparable to and have a few big questions.

1. It does seem that app devs are reporting the same max 5.x GB usage.
2. It also seems pretty accepted that the M1 GPU is sharing the LPDDR4 system RAM.
3. Could the new chip (M2?) include 64GB of LPDDR5/6 for their mobile integrated GPU?
4. If this architecture carries over to the next M chip, it's not a very exciting GPU, even at 32 cores.
5. No one I’ve spoken with can compare the M1 GPU cores to anything we know.
6. It does seem that the neural engine may be handling shaders and other GPU-focused tasks as well, and it also shares the system RAM.
7. Intel seems to be attacking this directly.

TL;DR LPDDR4 may be fine for an MBA but not our future highest-end MBP.

Thoughts?
 
Shared LPDDR5 with 32 GB or 64 GB available to the GPU has never been done before.
Even the Xbox and PS5 use a shared 16 GB pool of GDDR6.

Just imagine a MacBook Pro with access to 64 GB of memory for the GPU.
Mind-blowing.
 
Great addition! Thank you. I hadn't even thought of GDDR6! Wow. Do you agree that the current setup with shared LPDDR4 is lame, even at 64 GB?
 
Mmm. The M1 was really designed for low-power Macs; it does not even support 64 GB of RAM.
I think the M1X will answer more questions, as it will come to higher-end MacBooks.
 
Yes, sorry, that's what I was aiming at. As mentioned in #4, I'm hoping for an M2 (not M1X) with a new setup for RAM, perhaps LPDDR5 or 6.
And we are hearing reports of up to 64 GB of RAM, aren't we?
 
Chances are Apple will use separate VRAM, much like any dedicated GPU.

Did Apple say they would never contract Radeon for Apple Silicon Macs?
 
1. It does seem that app devs are reporting the same max 5.x GB usage.

Whut?

2. It also seems pretty accepted that the M1 GPU is sharing the LPDDR4 system RAM.

What do you mean "pretty accepted"? It's a fact, not an opinion.

3. Could the new chip (M2?) include 64GB of LPDDR5/6 for their mobile integrated GPU?

There is no LPDDR6. LPDDR5 is a real possibility.

4. If this architecture carries over to the next M chip, it's not a very exciting GPU, even at 32 cores.

Why not? I'd say RTX 3060-3070 performance levels in a thin-and-light notebook with 20+ hours of battery life are rather exciting.

5. No one I’ve spoken with can compare the M1 GPU cores to anything we know.

Doesn't sound like you spoke to anyone who knows anything about this stuff, then. The performance characteristics of the M1 GPU are well established: it's a bit slower than the Nvidia GTX 1650 Max-Q.

6. It does seem that the neural engine may be handling shaders and other GPU-focused tasks as well, and it also shares the system RAM.

The neural engine does not handle any GPU tasks. Where did you even get this notion? The NPU is a specialized machine-learning coprocessor, and the GPU is the GPU.

7. Intel seems to be attacking this directly.

Ok...

TL;DR LPDDR4 may be fine for an MBA but not our future highest-end MBP.

Why not? Nvidia is using LPDDR5 for their upcoming supercomputer architecture featuring ultra-fast GPUs. Are you saying they don't know what they are doing?

Thoughts?

I think that your research could have been more rigorous ;)
 
Chances are Apple will use separate VRAM, much like any dedicated GPU.

Unlikely. Why would they do that and sabotage almost everything that's great about their system architecture?

Did Apple say they would never contract Radeon for Apple Silicon Macs?

They said that they are using GPUs of their own making for their new Macs.
 
Depends on what sort of performance they are targeting, I guess. High end would require different RAM. Perhaps GDDR6 or HBM for total system RAM?
 
High end would require more RAM bandwidth and larger caches. Apple has already stated that their plan of attack is wide memory interfaces, and their patents speak of a high number of memory channels with RAM chips integrated on package. A total of eight LPDDR5 channels will give you 400 GB/s of bandwidth. Sixteen channels will give you 800 GB/s. And so on.
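
Back-of-the-envelope math behind those figures, assuming LPDDR5-6400 parts and 64-bit channels (my assumptions, not confirmed specs):

```latex
% 6400 MT/s x 8 bytes per transfer = 51.2 GB/s per 64-bit channel
\[
6400\,\tfrac{\text{MT}}{\text{s}} \times 8\,\text{B}
  = 51.2\,\tfrac{\text{GB}}{\text{s}} \text{ per channel},
\qquad
8 \times 51.2 \approx 410\,\tfrac{\text{GB}}{\text{s}},
\qquad
16 \times 51.2 \approx 820\,\tfrac{\text{GB}}{\text{s}}.
\]
```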
 
5. No one I’ve spoken with can compare the M1 GPU cores to anything we know.
Just looked at Metro Exodus on the Mac App Store.
Minimum system requirements (GPU): Radeon Pro 560
Recommended system requirements (GPU): Radeon RX 5700 XT / Apple M1

The fact that they are comparing the M1 with a 5700 XT for gaming performance is pretty impressive.
 
Actually, the M1 uses LPDDR4X, which is faster than LPDDR4: https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested/3

The 5 GB RAM usage limit is for iPadOS, not macOS. The M1 GPU is faster than the Radeon RX 560X and sometimes even as fast as the GeForce GTX 1650 (link above).

TFLOPS are not everything, but the rumored 128-core GPU would be crazy fast. It would be faster than any GPU on the market, including the GeForce RTX 3090 (see the quick arithmetic after the table)!

M1 8 GPU cores 2.6 TFLOPS
M? 16 GPU cores 5.2 TFLOPS
M? 32 GPU cores 10.4 TFLOPS
M? 64 GPU cores 20.8 TFLOPS
M? 128 GPU cores 41.6 TFLOPS

Radeon Pro 5700 6.2 TFLOPS
Radeon Pro 5700 XT 7.7 TFLOPS
Radeon Pro Vega II 14.06 TFLOPS
Radeon Pro Vega II Duo 2x14.06 TFLOPS
GF RTX 3060 14.2 TFLOPS
GF RTX 3060 Ti 16.2 TFLOPS
Radeon RX 6800 16.2 TFLOPS
GF RTX 3070 20.3 TFLOPS
Radeon RX 6800 XT 20.7 TFLOPS
Radeon RX 6900 XT 23 TFLOPS
GF RTX 3080 29.8 TFLOPS
GF RTX 3090 35.6 TFLOPS
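
Where the 2.6 TFLOPS baseline and the linear scaling come from, assuming the widely reported (not Apple-confirmed) 128 FP32 ALUs per GPU core at roughly 1.28 GHz:

```latex
% Peak FP32 throughput, 2 ops per fused multiply-add (assumed figures, see above):
\[
\text{GFLOPS} \approx \underbrace{128\,N_{\text{cores}}}_{\text{FP32 ALUs}}
\times \underbrace{2}_{\text{FMA}}
\times \underbrace{1.278}_{\text{GHz}}
\]
% N = 8   ->  ~2617 GFLOPS = 2.6 TFLOPS (the M1)
% N = 128 -> ~41.9 TFLOPS (41.6 in the table is the same figure: 2.6 x 16, rounded)
```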


I extrapolated some gaming benchmarks for the M2, and it will be impressive (1260p is for the 24" iMac):

- M1 GPU 8 cores: Borderlands 3 1080p Ultra 22 fps - medium 30 fps (1260p 19-26, 1440p 15-23)
- M2 GPU 16 cores 1440p 30-46 fps, 32 cores 1440p 60-92 fps

- M1 GPU 8 cores: Deus Ex: Mankind Divided 1080p Ultra 24 fps (1260p 20, 1440p 18)
- M2 GPU 16 cores 1440p 36 fps, 32 cores 72 fps

- M1 GPU 8 cores: Shadow of the Tomb Raider 1080p Medium 24 fps (1260p 20, 1440p 18)
- M2 GPU 16 cores 1440p 36 fps, 32 cores 72 fps

- M1 GPU 8 cores: Metro Exodus 1080p medium 25-45 fps (1260p 21-38, 1440p 19-35)
- M2 GPU 16 1440p 38-70 fps, 32 cores 76-140 fps

A 32-core M2 GPU doing 60 fps at 1440p Ultra in Borderlands 3 (via Rosetta 2) would be on par with a Radeon 5700 XT, RTX 2070 Super, 2080, or 1080 Ti.

GPU performance often increases proportionally thanks to parallel computing: if everything else in the architecture is the same, more cores means you can render more stuff at the same time. I don't know about all games, but many games, especially newer ones, can take advantage of that. It's not always the case in reality, and 4x more cores in theory doesn't always mean 4x the performance, but we can always hope when we're guessing, especially when the M1 GPU has already exceeded our expectations. :)
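
Here is the toy model behind my numbers, a minimal sketch that assumes perfectly linear scaling with core count (optimistic, as I said):

```swift
// Toy model: assume fps scales linearly with GPU core count at a fixed
// resolution and settings. Real games rarely scale this perfectly.
func scaledFPS(m1FPS: Double, cores: Int) -> Double {
    m1FPS * Double(cores) / 8.0  // the M1 has 8 GPU cores
}

// M1 manages roughly 15 fps at 1440p Ultra in Borderlands 3:
print(scaledFPS(m1FPS: 15, cores: 16))  // 30.0
print(scaledFPS(m1FPS: 15, cores: 32))  // 60.0
```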

We know that the M1 with its 8-core GPU at 10 W can perform as well as other GPUs with much higher TDPs. So an M2 with a 32-core GPU at 40 W could perform like the 2070 Super at 200 W. I used the benchmarks in the videos below, where the M1 gets 22 fps at 1080p Ultra in the BL3 built-in benchmark and about 30 in gameplay. An M2 32-core GPU would manage around 60 at 1440p Ultra, while the 2070 Super manages 56-66 at the same settings. I'm not even taking into account that the M2 may have a faster CPU or higher-clocked GPU, plus LPDDR5 or other new benefits. It will be very exciting to see what Apple can come up with. :)

 
The neural engine does not handle any GPU tasks. Where did you even get this notion? The NPU is a specialized machine-learning coprocessor, and the GPU is the GPU.
I believe we can count on GPU ops (Metal) running on the GPU. However,

If possible, Core ML will run the entire model on the [Neural Engine]. However, it will switch to another processor when it encounters an unsupported layer. Even if Core ML could theoretically run the second part of the model on the GPU, it might actually decide to use the CPU.

Hence, UMA is vital for supporting a flexible, efficient, forward-looking architecture.
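
For reference, this is the only knob apps really get; a minimal sketch using Core ML's computeUnits option (MyModel is a placeholder for an Xcode-generated model class, not a real API):

```swift
import CoreML

// Apps can only hint which processors Core ML may use; the framework still
// decides per-layer whether work runs on the Neural Engine, GPU, or CPU.
let config = MLModelConfiguration()
config.computeUnits = .all  // or .cpuOnly, .cpuAndGPU, .cpuAndNeuralEngine

// "MyModel" stands in for the class Xcode generates from a .mlmodel file.
if let model = try? MyModel(configuration: config) {
    // Run predictions; Core ML transparently moves layers between units.
    _ = model
}
```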
 
Hence, UMA is vital for supporting a flexible, efficient, forward-looking architecture.

Precisely! Unified memory and APIs that take the entire system state into account enable Apple Silicon to offer good performance while still being very efficient. Apple chose to implement certain features in a redundant manner (ML can run on the NPU, on the AMX coprocessor, or on the GPU, all with slightly different performance characteristics and a different feature set). It might appear wasteful at first, but it gives Apple a lot of flexibility when running these tasks and allows the chip to transparently scale to different requirements.
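
To illustrate what separate VRAM would break, here is a minimal Metal sketch of the zero-copy sharing that unified memory enables (standard Metal API, nothing speculative):

```swift
import Metal

// On Apple Silicon the CPU and GPU share one physical memory pool, so a
// .storageModeShared buffer is visible to both sides with no copies.
let device = MTLCreateSystemDefaultDevice()!
print(device.hasUnifiedMemory)  // true on M1

var input: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// A compute shader can read/write `buffer` directly, and the CPU sees the
// results via buffer.contents() without any staging copy or blit.
```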
 
I think that your research could have been more rigorous ;)

Thank you. MR is becoming a cesspool of those who either don't understand computing or couldn't do a Google search. I wish the MR forums were back in the PowerPC glory days, when people always had a good TECHNICAL discussion.

Now this whole forum is "Is iT SafE tO cHarGe My LapTOp OveRnIgHT???"
 
I can see the GPU on the M1X and some other high-end Macs using its own VRAM pool. Curious to see how they will move forward with those systems.
 
I can see the GPU on the M1X and some other high-end Macs using its own VRAM pool.

To which end? Why do you think this would be useful and how would Apple reconcile separate VRAM with their Apple Silicon programming model that revolves around the notion of unified memory?
 
To which end? Why do you think this would be useful and how would Apple reconcile separate VRAM with their Apple Silicon programming model that revolves around the notion of unified memory?

I said "I can see", I still think the unified memory + 32GB or 64GB would be a better way. Personally I would love it to see that way.

Take a look at the consoles: they use GDDR6 all around in their SoCs.
 
I said "I can see", I still think the unified memory + 32GB or 64GB would be a better way. Personally I would love it to see that way.

The reason I am asking is that unified memory is pretty much the default assumption across the entire range of Apple Silicon Macs (Apple's communication has been fairly clear on this so far). So every time someone suggests they are going to use separate VRAM, I have to ask why that person thinks Apple would break their elegant design and what kind of benefit it would bring, in their opinion.
 
Unified memory is much more interesting if they pull it off. It would be a game changer at 3060/3070 performance.
 
I can see the GPU on the M1X and some other high-end Macs using its own VRAM pool. Curious to see how they will move forward with those systems.
Not a chance... last year's and this year's State of the Union sessions are key to what they are doing with their software and hardware.
Dedicated VRAM is dead. Now you must think even bigger: Apple could, within the next 5 years, share the SSD as well, based on their unified hardware system. There were some hints about that in their dev kit keynote event.
The consoles are here now; the next big console generation will adopt the same system, but that will be 7-8 years from now.
 
Apple could, within the next 5 years, share the SSD as well, based on their unified hardware system. There were some hints about that in their dev kit keynote event.

They kind of already do. From what we know, the SSD is connected directly to the M1's internal bus, and the on-chip controller emulates the NVMe protocol to communicate with the OS. The next logical step is to drop NVMe altogether and go fully custom, potentially exposing the SSD storage as byte-addressable physical RAM in a common address space. This would allow the kernel to map SSD storage directly, eliminating the need for any logical layer and dramatically improving latency. But I have no idea about these things, and I don't know whether there are any special requirements that would make such a direct-mapping approach non-viable.
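
The closest userspace analogue today is memory-mapping a file, where reads become plain loads and the kernel pages data in from storage on demand. A minimal sketch (the path is just an example):

```swift
import Darwin

// Map a file into the address space: after mmap, reading it is an ordinary
// memory load, and the kernel faults pages in from storage on demand.
let fd = open("/tmp/example.bin", O_RDONLY)  // example path
var info = stat()
fstat(fd, &info)
let length = Int(info.st_size)

if length > 0,
   let ptr = mmap(nil, length, PROT_READ, MAP_PRIVATE, fd, 0),
   ptr != MAP_FAILED {
    let bytes = ptr.assumingMemoryBound(to: UInt8.self)
    print(bytes[0])  // a plain load; no read() syscall per access
    munmap(ptr, length)
}
close(fd)
```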
 