Yes, massively so. There is a good reason why nobody does anything like that. Mounting a couple of dies on a larger passive substrate using low-density connector wires is considerably cheaper than manufacturing one large die. In fact, the industry is moving towards disaggregated stacked dies and high-density interconnects (2.5D/3D packaging) to save costs.



DRAM scaling depends on capacitor density, not transistor density. Not to mention that the design goals for processing circuitry and DRAM are very different. Industry leaders in compute (including Nvidia, Apple, AMD, Intel, and others) don't make their own DRAM, because designing, optimizing, and validating DRAM is an entirely different task. What's more, DRAM and processors aren't even made in the same foundries. There is embedded DRAM of course, but it has limited applications and you can't really build a high-capacity RAM system from it.

That's not to say things won't change in the future. Apple has been experimenting with high-bandwidth multi-chip-module RAM designs for the Vision Pro, for example. They are crazy enough and large enough to pursue their own proprietary high-performance RAM solution.

Interesting. You learn something new every day.
 
Interesting. You learn something new every day.
A defect that takes out a large die will only take out one of the 4 smaller dies that fit into the same area as the one larger die. For a given process there is a die size that gives the lowest price for a system design: go too small and the cost of interconnects outweighs the savings from the improved yield of smaller dies; go too large and the drop in yield dominates costs. The original version of Moore's Law was about increased yield from process improvements raising the number of gates per die for the lowest-cost design.
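To make that tradeoff concrete, here's a minimal sketch of the yield math using a simple Poisson defect model. All the numbers (wafer cost, defect density, die areas, interconnect overhead) are made-up illustrative values, not real foundry data:

```python
import math

def poisson_yield(defect_density: float, die_area_cm2: float) -> float:
    """Fraction of dies that survive, assuming randomly scattered defects."""
    return math.exp(-defect_density * die_area_cm2)

def cost_per_good_die(wafer_cost: float, wafer_area_cm2: float,
                      die_area_cm2: float, defect_density: float,
                      interconnect_overhead: float = 0.0) -> float:
    """Effective cost of one working die, plus any per-die packaging cost."""
    dies_per_wafer = wafer_area_cm2 / die_area_cm2        # ignores edge losses
    good_dies = dies_per_wafer * poisson_yield(defect_density, die_area_cm2)
    return wafer_cost / good_dies + interconnect_overhead

# Illustrative numbers: $10k wafer, ~707 cm^2 (300mm), 0.1 defects/cm^2.
WAFER_COST, WAFER_AREA, D0 = 10_000.0, 707.0, 0.1

# One big 8 cm^2 die vs. four 2 cm^2 chiplets that each need extra packaging.
monolithic = cost_per_good_die(WAFER_COST, WAFER_AREA, 8.0, D0)
chiplets = 4 * cost_per_good_die(WAFER_COST, WAFER_AREA, 2.0, D0,
                                 interconnect_overhead=5.0)
print(f"monolithic: ${monolithic:.0f}   four chiplets: ${chiplets:.0f}")
```

With these toy numbers the four small dies come out well ahead despite the packaging overhead; push the per-die interconnect cost high enough, or shrink the dies further, and the monolithic option wins again, which is exactly the sweet spot described above.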
 
If anything I would argue that it should be cheaper to have it all on one chip. At that point, the RAM should also shrink with higher density 2nm processes.
Interesting. You learn something new every day.

The confidence in the original statement and the subsequent realization that it's way more complicated remind me a lot of the Dunning-Kruger effect. No offense.

I agree with the OP that people should buy laptops now. I doubt this memory craze is going away anytime soon.
 
The confidence in the original statement and the subsequent realization that it's way more complicated remind me a lot of the Dunning-Kruger effect. No offense.

Nobody knows everything and there is no shame in voicing one's thoughts. I think this forum would be a better place if all of us were as open to receiving new information as @Ursadorable.
 
DRAM scaling depends on capacitor density, not transistor density.
Yep. DRAM scaling has some interesting issues. Reducing the dimensions of a capacitor reduces how much charge it can store, which (in modern DRAM nodes) makes it very hard to hold enough charge to retain data until the next refresh.

The various techniques used to improve capacitance (such as increasing the vertical or Z dimension of the capacitor, even as the X/Y dimensions shrink) have led DRAM manufacturers to pursue materials and various other process steps that are not only unlike high performance logic process nodes, they're incompatible with high performance logic. Forget about the yield issues involved with making one giant supersize die with everything on it; the bigger barrier is that you literally can't manufacture high density DRAM in a cutting edge logic process, or high performance / high density logic in a cutting edge DRAM process.

In recent times, DRAM scaling has slowed a lot relative to logic. It hasn't yet been able to go below 10nm thanks to those fundamental problems with reducing the size of DRAM bit cells.
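As a rough back-of-the-envelope illustration of that retention problem (every value below is an order-of-magnitude guess, not real process data):

```python
# Rough sketch: why a smaller cell capacitor means less time between refreshes.
# All parameter values are illustrative guesses, not real DRAM process data.

def retention_time_s(capacitance_f: float, supply_v: float,
                     leakage_a: float, sense_fraction: float = 0.5) -> float:
    """Seconds until stored charge decays below what the sense amp can read.

    Assumes the cell starts at Q = C * V and must keep at least
    `sense_fraction` of that charge to be read correctly.
    """
    stored_charge = capacitance_f * supply_v
    allowed_loss = stored_charge * (1.0 - sense_fraction)
    return allowed_loss / leakage_a

for cap_ff in (30.0, 10.0, 5.0):                  # shrinking cell capacitor
    t = retention_time_s(cap_ff * 1e-15, supply_v=1.1, leakage_a=1e-14)
    print(f"{cap_ff:>4.0f} fF cell: retention ~{t:.2f} s at 10 fA leakage")
```

The scaling is linear: halve the capacitance at the same leakage and you halve the time the cell can hold its bit, so refresh has to happen more often (or leakage has to be engineered down, which is where those exotic materials and process steps come in).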
 
Yep. DRAM scaling has some interesting issues. Reducing the dimensions of a capacitor reduces how much charge it can store, which (in modern DRAM nodes) makes it very hard to hold enough charge to retain data until the next refresh.

The various techniques used to improve capacitance (such as increasing the vertical or Z dimension of the capacitor, even as the X/Y dimensions shrink) have led DRAM manufacturers to pursue materials and various other process steps that are not only unlike high performance logic process nodes, they're incompatible with high performance logic. Forget about the yield issues involved with making one giant supersize die with everything on it; the bigger barrier is that you literally can't manufacture high density DRAM in a cutting edge logic process, or high performance / high density logic in a cutting edge DRAM process.

In recent times, DRAM scaling has slowed a lot relative to logic. It hasn't yet been able to go below 10nm thanks to those fundamental problems with reducing the size of DRAM bit cells.

Thank you! It sounds like the gulf between RAM and logic is even wider than I thought.

I'd also be very curious to hear your thoughts regarding ECC on consumer platforms such as Apple Silicon, if that is something you'd be willing to discuss.
 
The various techniques used to improve capacitance (such as increasing the vertical or Z dimension of the capacitor, even as the X/Y dimensions shrink) have led DRAM manufacturers to pursue materials and various other process steps that are not only unlike high performance logic process nodes, they're incompatible with high performance logic. Forget about the yield issues involved with making one giant supersize die with everything on it; the bigger barrier is that you literally can't manufacture high density DRAM in a cutting edge logic process, or high performance / high density logic in a cutting edge DRAM process.
The original DRAM chip, Intel's 1103, required the use of an external comparator to read out the 1-versus-0 state, and I would imagine that modern DRAM requires similar functionality on chip. The ultimate limit for scaling is the charge of a single electron, though some bright person may be able to find a way to store more than one bit with each electron.
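For a sense of scale, the electron count per cell is just Q = C·V divided by the electron charge; a quick sketch with illustrative capacitance values (not vendor data):

```python
# How many electrons actually represent one stored DRAM bit? N = C * V / e.
# Capacitance values are illustrative, not vendor data.
E_CHARGE = 1.602e-19   # coulombs per electron

for cap_ff, volts in ((30.0, 1.1), (10.0, 1.1), (5.0, 1.0)):
    electrons = (cap_ff * 1e-15) * volts / E_CHARGE
    print(f"{cap_ff:>4.0f} fF @ {volts} V -> ~{electrons:,.0f} electrons per bit")
```

Even the smallest of those cells holds tens of thousands of electrons, so there's still a long way down to the single-electron limit, but the trend line only points in one direction.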
 
Thank you! It sounds like the gulf between RAM and logic is even wider than I thought.

I'd also be very curious to hear your thoughts regarding ECC on consumer platforms such as Apple Silicon, if that is something you'd be willing to discuss.
My thoughts are simply that I'd like Apple to use ECC, but I doubt that they ever will. We got it in Intel Mac Pros when it came along for the ride because Xeon platforms always support it, but Apple's current package-on-package memory strategy is pretty firmly anchored in LPDDRn, and LPDDR makes ECC support awkward at best.

(They do of course support it in the sense that all DDR5 generation RAM has internal ECC. That's a consequence of the capacitor scaling issues I mentioned: in the process nodes being used to make DDR5, the bit cells have gotten so small they've ceased to be as reliable at remembering bits as they used to be, even when the memory is functioning within spec. The DRAM industry decided to use ECC to compensate for this, going forwards. Brings reliability back up to where it needs to be, but takes a bite out of their density scaling improvements since the DRAM chip needs more bits than it exposes as visible storage. The ECC syndrome generation and error check / correction is all performed on-die in the DRAM chip. From a system perspective, you do not get the full RAS benefits of traditional end-to-end ECC - there's less insight into what might be going wrong, you don't get correction for poor bus connections, etc.)
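To make the on-die correction idea concrete, here's a toy single-error-correcting Hamming code over one byte. It's just a minimal sketch of the mechanism; DDR5's actual on-die ECC is commonly described as a (136,128) single-error-correcting code, and the vendors' real implementations aren't public:

```python
# Toy Hamming SEC code: 8 data bits -> 12-bit codeword, parity at positions 1,2,4,8.
# Illustrates the syndrome-based correction idea only; not the real DDR5 code.

DATA_POSITIONS = [p for p in range(1, 13) if p & (p - 1)]  # non-powers-of-two

def encode(data: int) -> int:
    """Place 8 data bits, then set each parity bit so its group XORs to zero."""
    assert 0 <= data < 256
    word = 0
    for i, pos in enumerate(DATA_POSITIONS):
        word |= ((data >> i) & 1) << pos
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= (word >> pos) & 1
        word |= parity << p
    return word

def decode(word: int) -> tuple[int, int]:
    """Return (data, error_position). A single flipped bit is corrected;
    the syndrome is literally the position of the flipped bit."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= (word >> pos) & 1
        if parity:
            syndrome |= p
    if syndrome:
        word ^= 1 << syndrome          # flip the bad bit back
    data = 0
    for i, pos in enumerate(DATA_POSITIONS):
        data |= ((word >> pos) & 1) << i
    return data, syndrome

codeword = encode(0b10110011)
corrupted = codeword ^ (1 << 5)        # one bit flips "in storage"
data, where = decode(corrupted)
assert data == 0b10110011 and where == 5
print(f"recovered {data:08b}, corrected bit at position {where}")
```

Note what this toy version shares with the real thing: the check bits live alongside the data, and the correction happens entirely inside decode(), so an outside observer who only sees the corrected data has no idea an error ever occurred; that's exactly the RAS visibility problem with on-die-only ECC.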
 
My thoughts are simply that I'd like Apple to use ECC, but I doubt that they ever will. We got it in Intel Mac Pros when it came along for the ride because Xeon platforms always support it, but Apple's current package-on-package memory strategy is pretty firmly anchored in LPDDRn, and LPDDR makes ECC support awkward at best.

(They do of course support it in the sense that all DDR5 generation RAM has internal ECC. That's a consequence of the capacitor scaling issues I mentioned: in the process nodes being used to make DDR5, the bit cells have gotten so small they've ceased to be as reliable at remembering bits as they used to be, even when the memory is functioning within spec. The DRAM industry decided to use ECC to compensate for this, going forwards. Brings reliability back up to where it needs to be, but takes a bite out of their density scaling improvements since the DRAM chip needs more bits than it exposes as visible storage. The ECC syndrome generation and error check / correction is all performed on-die in the DRAM chip. From a system perspective, you do not get the full RAS benefits of traditional end-to-end ECC - there's less insight into what might be going wrong, you don't get correction for poor bus connections, etc.)

Thank you, this aligns with my thoughts too. Interestingly, Apple has several patents regarding error correction and tracking with LPDDR5 (linked below). What do you think the chances are that these patents are active in current-gen hardware, and how would one verify it if one wanted to?

 
Thank you, this aligns with my thoughts too. Interestingly, Apple has several patents regarding error correction and tracking with LPDDR5 (linked below). What do you think the chances are that these patents are active in current-gen hardware, and how would one verify it if one wanted to?

There's no way to tell whether they're implemented without someone reverse engineering a reporting mechanism, or Apple documenting an interface to report error events.
 
Apple already charges $200 per 8GB for RAM; they should have enough margin to cover the price hike.

Apple signs very long-term supply agreements, pre-pays for considerable inventory (the vendors use the pre-payments for capital expenditures necessary to supply Apple), and uses hedge contracts to insulate themselves from price movements. They're well-insulated in the near term, but eventually all of those measures do have end dates and must be renewed, and whenever that is, inevitably we're going to feel it.

I sure am glad I bought my M4 Max Mac Studio with 128GB from the Refurb Store this past summer — $3265 with 2TB SSD and AppleCare Plus.

When the AI bubble bursts, there will be some very interesting acquisition opportunities for a technology company with a large cash cushion.
 