
What if that 36 GB of RAM capacity comes from a 256-bit LPDDR5 memory config, but 4 GB of that RAM comes from HBM2 memory on the package, something those patents from two years ago COULD point to?
 
What if that 36 GB of RAM capacity comes from a 256-bit LPDDR5 memory config, but 4 GB of that RAM comes from HBM2 memory on the package, something those patents from two years ago COULD point to?
Similar to the point I made earlier about ECC, the usable memory would be only 32 GB in either scenario.

The interesting thought is that such a hybrid memory architecture could theoretically have multiple use cases: the additional DRAM could be used for cache and/or ECC.
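Just to make the arithmetic of that hypothetical split explicit (a sketch of the speculation above, not a known Apple design):

```python
# Sketch of the hypothetical hybrid layout speculated above; none of this is a
# confirmed Apple design. Assumes 32 GB of LPDDR5 for data plus a 4 GB
# on-package stack reserved for ECC codes and/or cache.
lpddr5_data_gb = 32       # capacity exposed to the OS
on_package_extra_gb = 4   # hypothetical HBM2-style stack (ECC/cache)

total_physical_gb = lpddr5_data_gb + on_package_extra_gb
print(f"physical: {total_physical_gb} GB, usable: {lpddr5_data_gb} GB")
# -> physical: 36 GB, usable: 32 GB
```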
 
I am skeptical about Apple moving to ECC memory across the family starting with the M3, since it does add cost for what would be a marginal benefit for the majority of users. If I am honest, I am not sure even the Mac Pro will offer it.

I could see Apple increasing the memory bus width from the current 128/256/512 bits to 192/384/768, both to improve performance and to allow higher memory capacities.
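As a rough illustration of how those wider buses could line up with higher capacities (a sketch only; the 12 GB-per-64-bit-group density is an assumption picked to show where a figure like 36 GB could come from, not a known part):

```python
# Illustration only: the capacity a given bus width could support if the bus is
# built from 64-bit LPDDR5 channel groups with an assumed density per group.
# The 12 GB-per-group figure is an assumption for this example, not a known part.
gb_per_64bit_group = 12

for bus_bits in (128, 192, 256, 384, 512, 768):
    groups = bus_bits // 64
    print(f"{bus_bits:>3}-bit bus -> {groups} x {gb_per_64bit_group} GB = {groups * gb_per_64bit_group} GB")
# 192-bit lands on 36 GB, 384-bit on 72 GB, etc., under this assumption.
```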
 
What a weird set of RAM discussions.

36 GB base SKU was never going to happen and Gurman never even implied it. I don't grok where that came from.

People keep proposing lots of variations of M3 Pro/Max SKUs, but there are only going to be a few. Unless Apple goes off and does something entirely different, they're designing ONE TRUE CHIP: the M3 Max (the M1 had more variations, I don't know if "Better" is still happening, and we didn't get an M2 Ultra yet):
[attached image: 1684337354225.png]


For the 96 GB configuration of the MacBook Pros, are there two memory dies in each of those four extra-big memory chips?

Unless Apple moves to a 3 memory chip (pro) and 6 memory chip (max) configuration, which seems unlikely, I really don't quite see how the memory is going to work given the rumours. The base SKU on the Pro going up to 18 GB would be nice.

I still see these as the realistic options for SKUs:
[attached image: 1684338994803.png]
 
The SoC is the Mx die. The SoC and memory combined into one package is a SiP.

It’s a package, yes, but that in turn sits on another logic board. The Apple Watch is fundamentally different in that the package contains everything — there is no separate board.

On an iPhone/iPad/Mac, things like the 5G modem, WiFi, flash storage, power management, Ethernet, and audio sit on the logic board, not the package. The Watch has to contain all of that inside its package, which saves space and power but also means it's severely constrained in its capabilities.

So it’s a bit pedantic, sure, but I don’t agree that it’s just marketing. The inside looks different.

And of course, on a Mac Pro, things will be different yet again. It’s plausible there will be separate PCIe slots, and perhaps also DIMM RAM slots. The package may not even have RAM at all.
 
I am not sure I understand why? There appear to be techniques to implement ECC with LPDDR5:

Of those, side-band ECC could be implemented in an Mx with "36 GB of memory" if Apple used two 128 Gb LPDDR5 devices for data and one 32 Gb LPDDR5 device for the ECC codes?

The issue lies in your statement "could be implemented". The standard doesn't really support it. Something outside the standard has to be layered on top.

When has Apple shipped a GPU with ECC turned on? They haven't. Even on the high-end workstation cards, they opted for it to be off. [ The Power Mac G5 shipped with non-ECC memory; where ECC was optional, Apple skipped it. ]

LPDDR5 doesn't support side-band ECC. You can burn up bandwidth layering it on top, with augmentations to your memory controller to explicitly do multiple reads/writes for every write ... but is Apple going to do that, given they really don't like sacrificing GPU bandwidth for ECC?

You can do it, but that "burn bandwidth" workaround carries an implicit presumption that you are probably trying this with a CPU, or at most a limited-performance iGPU. That isn't what Apple has here.

This is all 'extra stuff' that has to be thrown into what is likely the same memory controller that goes into the rest of the M-series. Is Apple going to turn on ECC there? Probably not. It will be extra baggage they have to carry around on the other dies.


The previous Intel workstation CPU packages that Apple used all trickled down from the server market. The nominal usages for those dies in the high-volume deployments in the server space all had ECC on. Here, Apple is coupled to the higher-volume market usages where ECC is off. The Mac Pro all by itself isn't likely to be the 'tail wagging the dog' here. It is going to pick up whatever the bulk of the volume sales is 'paying most of the freight' for.


Some iterations from now (M4 - M6), Apple may pick it up when the max capacity of the Studio and MBP 16" gets into the >128 GB range. The industry norm for the last decade or so has been that it was "safe enough" if you had less than 128 GB ... Later, with LPDDR6/7/8, where there is more bandwidth they can 'give away' on ECC overhead and capacities across the whole lineup are higher (and relatively much cheaper in $/GB), that's when "could be implemented" will likely cross over into "will be implemented". It is just not soon.


[ P.S. Nvidia implemented overlaid ECC on the Grace SoC (Arm, LPDDR5x), but Grace doesn't have any GPU cores. If you look at Nvidia's GPU packages where GPU cores are present, they haven't done it. I don't think they have done it on their automotive embedded stuff that has an iGPU present, either. ]


[ edit P.P.S. If Apple was way, way out on the bleeding edge with LPDDR5X 8533 Mbps RAM, then maybe sooner.

8533/6400 is about a 33% increase. So even if you had to take a 25% ECC overhead hit, you would still come out slightly ahead of today's 6400 Mbps (roughly +7% effective). In that case, it is still a win/win for bandwidth and ECC.

The problem is that Apple needs bulk, and they likely want it at as cheap a price as possible. That conflicts with the hyper bleeding edge. There is somewhat of a memory glut at the moment, but several years ago it didn't really look like there would be one now. ]
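To make that back-of-the-envelope math concrete (the 25% figure is this post's assumption, and treating ECC as extra transfers per useful byte is just one way to model it):

```python
# Back-of-the-envelope version of the numbers above. The 25% overhead is this
# thread's assumption for ECC layered on top of LPDDR5X, not a measured figure.
base_mbps = 6400           # today's LPDDR5 per-pin rate
fast_mbps = 8533           # bleeding-edge LPDDR5X per-pin rate
ecc_extra_traffic = 0.25   # assumed extra transfers per useful byte for ECC codes

raw_uplift = fast_mbps / base_mbps - 1             # ~33%
effective_mbps = fast_mbps / (1 + ecc_extra_traffic)
net_gain = effective_mbps / base_mbps - 1          # ~+7% over today's 6400

print(f"raw uplift {raw_uplift:.0%}, effective {effective_mbps:.0f} Mbps, net {net_gain:+.0%}")
```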
 
I am wondering if the LPDDR interface reference for Mx has been misleading us? Micron LPDDR5 chips max out at x64. The Mx Pro has two memory packages, so this suggests the memory bus could only be 128 bits wide. However, Apple documents a 256-bit wide bus. This leads me to think that the Mx Pro is already using some form of HBM memory to achieve the 256-bit memory bus, though Apple does not appear to be using the HBM standard, which is 1024 bits wide.

HBM3 devices are shipping now.
 
What a weird set of RAM discussions.

36 GB base SKU was never going to happen and Gurman never even implied it. I don't grok where that came from.

People keep proposing lots of variations of M3 Pro/Max SKUs, but there are only going to be a few. Unless Apple goes off and does something entirely different, they're designing ONE TRUE CHIP: the M3 Max (the M1 had more variations, I don't know if "Better" is still happening, and we didn't get an M2 Ultra yet):
View attachment 2203220

For the 96 GB configuration of the MacBook Pros, are there two memory dies in each of those four extra-big memory chips?

Unless Apple moves to a 3 memory chip (pro) and 6 memory chip (max) configuration, which seems unlikely, I really don't quite see how the memory is going to work given the rumours. The base SKU on the Pro going up to 18 GB would be nice.

I still see these as the realistic options for SKUs:
View attachment 2203225

40 CPU cores on M3 Ultra.
 
What a weird set of RAM discussions.

36 GB base SKU was never going to happen and Gurman never even implied it. I don't grok where that came from.
As others have pointed out, 2 x 18 GB of LPDDR5 RAM is entirely possible and currently exists.

SK hynix LPDDR5
Low-power Advancements

As handset brands begin to adopt LPDDR5 as the new standard, SK hynix introduces the LPDDR5 as its main offering with 18GB of capacity and 6,400Mbps in transfer speeds.
So a 36 GB option on an M3 is something that could be done.
 
As others have pointed out, 2 x 18 GB of LPDDR5 RAM is entirely possible and currently exists.
The 18 GB SK hynix device you referenced is only x16, so achieving a 256-bit wide bus would require 16 of them. The current Mx Pro only has two memory packages.
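For reference, the arithmetic behind that objection (a minimal sketch; the x16 and x64 interface widths are the ones cited in the posts above):

```python
# Rough check of the point above: how many discrete DRAM devices it takes to
# build a given bus width, as a function of each device's interface width.
def devices_needed(bus_bits: int, device_bits: int) -> int:
    return bus_bits // device_bits

print(devices_needed(256, 16))   # x16 parts (like the 18 GB SK hynix die): 16 devices
print(devices_needed(256, 64))   # x64 multi-die packages: 4 devices
```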
 
40 CPU cores on M3 Ultra.

That isn't what the text says:
Apple is planning a "much bigger leap" with its third-generation chips, some of which will be manufactured with TSMC's 3nm process and have up to four dies, which the report says could translate into the chips having up to 40 compute cores.

"four dies" which means that is a rumour of the "M* Quadra" package? 40 is actually less than what I was proposing: 56 general compute cores. The originally rumoured M1 Quadra would have had 40 commute cores.
 
I still see these as the realistic options for SKUs:
View attachment 2203225

Implementing half an E-core cluster at N3 doesn't make much sense. Pretty good chance those 6's are really 8's.
And Apple could, if they want, bin off that whole second 4-core E cluster if they want a taller pricing ladder.

And the laptop Max probably isn't a good chiplet for the Ultra/Quadra. That second E-core cluster gets goofy once you start multiplying the dies. You'll have "another" E-core cluster on the other die(s). And you don't need the die bloat of the 4x GPU cluster either, once there are more GPU cores on the other die(s). You certainly don't need more than 6 TB sockets, 4 Secure Elements, and 4 SSD controllers. It is not in a 4-port laptop anymore. It isn't in any laptops at all.
They need to reuse the core cluster designs to save costs, but that specific die aggregation is dubious; it doesn't scale well at all.


P.S. Apple could cut that table off at the Ultra, where the scaling starts to fall apart, but the quad likely doesn't work well. Doubling down on bad scaling is only going to get worse.
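To illustrate the duplication argument (the per-die block counts here are assumptions pulled from the discussion above, not confirmed die contents):

```python
# Illustration of the scaling complaint above: fixed-function blocks multiply
# if a laptop-class die is simply tiled into Ultra/Quadra packages. Per-die
# counts are assumptions for the sake of the argument, not confirmed contents.
per_die = {
    "E-core clusters": 2,
    "Thunderbolt controllers": 4,
    "Secure Elements": 1,
    "SSD controllers": 1,
}

for dies, name in ((1, "Max"), (2, "Ultra"), (4, "Quadra")):
    totals = {block: count * dies for block, count in per_die.items()}
    print(f"{name} ({dies} die(s)): {totals}")
```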
 
Implementing half an E-core cluster at N3 doesn't make much sense. Pretty good chance those 6's are really 8's.
And Apple could, if they want, bin off that whole second 4-core E cluster if they want a taller pricing ladder.

And the laptop Max probably isn't a good chiplet for the Ultra/Quadra. That second E-core cluster gets goofy once you start multiplying the dies. You'll have "another" E-core cluster on the other die(s). And you don't need the die bloat of the 4x GPU cluster either, once there are more GPU cores on the other die(s). You certainly don't need more than 6 TB sockets, 4 Secure Elements, and 4 SSD controllers. It is not in a 4-port laptop anymore. It isn't in any laptops at all.
They need to reuse the core cluster designs to save costs, but that specific die aggregation is dubious; it doesn't scale well at all.
Well, we've already seen the Max "chiplet" used in the Ultra, and every rumour has suggested that Apple is trying to make the Quadra happen with it.

I don't see any problem with scaling all those E-cores; lots of efficient parallelization is fine. Some of the scaling might be weird, like the Thunderbolt controllers, but really, who cares? It is way better than Apple trying to make a new SoC for the Mac Pro, which would be a colossal waste of money.
 
That isn't what the text says:


"four dies" which means that is a rumour of the "M* Quadra" package? 40 is actually less than what I was proposing: 56 general compute cores. The originally rumoured M1 Quadra would have had 40 commute cores.
It means: M3, M3 Pro, M3 Max, M3 Ultra.

Exactly the same as M1 series.

And since M3 Pro is rumored to have 16 CPU cores, 8P/8E, it's logical that M3 Max would have 20 CPU cores: 12P/8E, and M3 Ultra 40 CPU cores: 24P/16E.
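Written out, the configurations in that reasoning look like this (rumors and speculation only, not confirmed specs):

```python
# The rumored configurations from the post above, written out as arithmetic.
# These are rumors and speculation, not confirmed specs.
configs = {"M3 Pro": (8, 8), "M3 Max": (12, 8)}                # (P-cores, E-cores)
configs["M3 Ultra"] = tuple(2 * n for n in configs["M3 Max"])  # two Max dies

for name, (p, e) in configs.items():
    print(f"{name}: {p}P + {e}E = {p + e} CPU cores")
# -> M3 Pro: 16, M3 Max: 20, M3 Ultra: 40
```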
 
I am wondering if the LPDDR interface reference for Mx has been misleading us? Micron LPDDR5 chips max out at x64. The Mx Pro has two memory packages, so this suggests the memory bus could only be 128 bits wide. However, Apple documents a 256-bit wide bus. This leads me to think that the Mx Pro is already using some form of HBM memory to achieve the 256-bit memory bus, though Apple does not appear to be using the HBM standard, which is 1024 bits wide.

HBM3 devices are shipping now.
It doesn't have HBM2; all of the packages have a simple LPDDR5 controller. The M2 has a 128-bit bus, the M2 Pro has a 256-bit bus, the M2 Max has a 512-bit LPDDR5 bus, and the M1 Ultra has a 1024-bit bus. The M2 Ultra would also have got a 1024-bit LPDDR5 bus.
 
It means: M3, M3 Pro, M3 Max, M3 Ultra.

Exactly the same as M1 series.
Nope Nope Nope, I can't cope with that parsing.

"M3, M3 Pro, M3 Max, M3 Ultra" are not even dies, they are packages made up of different numbers of dies.

There is zero chance the Ultra is going from 20 cores to 40 cores in two generations and one die shrink.
 
Nope Nope Nope, I can't cope with that parsing.

"M3, M3 Pro, M3 Max, M3 Ultra" are not even dies, they are packages made up of different numbers of dies.

There is zero chance the Ultra is going from 24 cores to 40 cores in one generation.
Good grief...

The same article said this: "The report claims that Apple and TSMC plan to manufacture second-generation Apple silicon chips using an enhanced version of TSMC's 5nm process, and the chips will apparently contain two dies, which can allow for more cores. These chips will likely be used in the next MacBook Pro models and other Mac desktops, the report says."

What did end up in the MacBook Pro 14 and 16 inch in the M2 series? The M2 Pro and M2 Max.

That's what they mean when they say "dies".

4 dies: M3, M3 Pro, M3 Max, M3 Ultra. Why do you guys complicate simple things?
 
And since M3 Pro is rumored to have 16 CPU cores, 8P/8E, it's logical that M3 Max would have 20 CPU cores:

The Max having more cores than the Pro would be unprecedented.

The M2 Max, in terms of CPU, is the M2 Pro running at a higher clock. The M1 Max is identical to the M1 Pro. (Ignoring the theoretically higher memory bandwidth.)


12P/8E, and M3 Ultra - 40 CPU cores: 24P/16E.

So the M3 Extreme would have 80? That's possible, I guess. But you're saying the M3 Ultra has double the cores compared to the M1 Ultra, which seems like quite a leap.
 
It doesn't have HBM2; all of the packages have a simple LPDDR5 controller. The M2 has a 128-bit bus, the M2 Pro has a 256-bit bus, the M2 Max has a 512-bit LPDDR5 bus, and the M1 Ultra has a 1024-bit bus. The M2 Ultra would also have got a 1024-bit LPDDR5 bus.
What I am suggesting is that there are no LPDDR5 chips available that are wide enough to allow those bus widths given the number of device packages we can see (Pro = 2, Max = 4, Ultra = 8). Apple must be using a unique package to make a device that has a 128-bit bus width suitable for the Pro, Max and Ultra. Apple's memory package does not meet the official JEDEC HBM specification, which has a width of 1024 bits and requires a silicon interposer. Their own "custom" HBM-like package avoids the silicon interposer, which reduces cost. I found a reference where Apple did disclose that they use an Apple-designed package for their memory, so that explains how they achieve a 128-bit wide bus per package.

So, they are doing something similar to HBM that is more cost effective. Thus, it is possible they could add another die to the stack in the package to provide additional functionality such as cache or ECC.
 
I am wondering if the LPDDR interface reference for Mx has been misleading us? Micron LPDDR5 chips max out at x64. Mx Pro has two memory packages.

Apple uses semi-custom packages. The plain Mn has two packages. The Mn Pro uses two BIGGER packages; what is inside each is two of what the plain Mn's packages contain. The Mn has a 128-bit wide aggregate bus. The Pro's is twice as big, at 256, so it needs four of what the plain Mn had.

Apple's semi-custom thing is there to save space. The memory dies are actually smaller than the physical package they are placed inside. If you are going to have four of them, you can cut down on some of the 'extra trim' at what would have been the adjoining edges and shave off a relatively small amount of space. However, if you are trying to minimize your laptop motherboard to carve out more space for battery, every little tidbit counts.

There are still four RAM die stacks in those two packages. The RAM stacks are what really matter ... not the plastic casing on the outside.

The package's memory bus width can be as wide as you want if you make your own Memory package container.

P.S. Apple can probably also goose out some incrementally shorter trace lengths between the packages, if they are designed to line up precisely with where the side-by-side memory controller clusters come off the die. If you're fanatical about Perf/Watt, you might pay extra for that too. It will be more expensive than generic off-the-shelf, but still way cheaper than HBM. What Apple has done is a "poor man's" HBM: they are using mostly standard LPDDR components, but arranged to mirror HBM methods. This is still cheaper than the substantively more expensive 2.5D interposer that real HBM would push them into.
 
What I am suggesting is that there are no LPDDR5 chips available that are wide enough to allow those bus widths given the number of device packages we can see (Pro = 2, Max = 4, Ultra = 8). Apple must be using a unique package to make a device that has a 128-bit bus width suitable for the Pro, Max and Ultra. Apple's memory package does not meet the official JEDEC HBM specification, which has a width of 1024 bits and requires a silicon interposer. Their own "custom" HBM-like package avoids the silicon interposer, which reduces cost. I found a reference where Apple did disclose that they use an Apple-designed package for their memory, so that explains how they achieve a 128-bit wide bus per package.

So, they are doing something similar to HBM that is more cost effective. Thus, it is possible they could add another die to the stack in the package to provide additional functionality such as cache or ECC.
No, they don't. The M2 package has 2 memory chips, and the M2 chip has a 128-bit bus: 2 x 64 = 128. The M2 Pro has a 256-bit bus, and the M2 Pro package has 4 memory chips: 4 x 64 = 256.

And there are 64-bit memory chips from Micron widely available (those are actually the ones Apple uses).
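Either way, the underlying arithmetic both posts are using is the same (a sketch; the package and die counts are what people are reading off the press images, not confirmed specs):

```python
# Sketch of the arithmetic in this exchange: aggregate bus width implied by the
# visible memory packages, assuming each LPDDR5 die/chip presents a 64-bit
# interface. Package/die counts are read off press images, not confirmed specs.
def bus_bits(packages: int, dies_per_package: int, bits_per_die: int = 64) -> int:
    return packages * dies_per_package * bits_per_die

print(bus_bits(2, 1))   # 128-bit: plain M2 reading (2 packages, 1 die each)
print(bus_bits(2, 2))   # 256-bit: Pro read as two bigger packages, 2 dies each
print(bus_bits(4, 2))   # 512-bit: Max read as four bigger packages
```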
 
Apple uses semi-custom packages. The plain Mn has two packages. The Mn Pro uses two BIGGER packages; what is inside each is two of what the plain Mn's packages contain. The Mn has a 128-bit wide aggregate bus. The Pro's is twice as big, at 256, so it needs four of what the plain Mn had.
Are Apple's press images wrong?

Don't they show the 32 GB maximum M2 Pro as having 4 memory packages:
[attached image: 1684345128966.png]


and the 96 GB maximum M2 Max chip as having 4, twice as large, packages:
[attached image: 1684345177093.png]


clearly with multiple memory dies inside them.
 