Investors were questioning the point of the acquisition when it happened.
Citation needed, as I don't recall any significant questioning.

But I'm not a shareholder, so I may not have been tuned into that story.
 
Most DRAM is a commodity, yes. But Apple has an unusual memory architecture. Does commodity DRAM offer the ideal design to work with it? Or might Apple benefit from a customized in-house design—one optimized to integrate synergistically with that architecture? Perhaps a specialized variant of HBM, say, or maybe an entirely different design.

Sure, it would likely cost more for Apple to do this than to buy off-the-shelf RAM. But Apple, with its premium prices, has the flexibility to pay for premium components, if they provide a significant performance boost.

The "barrier to entry" into the market would only apply if Apple wanted to compete on price/performance in producing commodity RAM that it then planned to sell into the general market. But neither of those would apply to the RAM Apple would build for itself (it's not going to be commodity RAM, and they're not going to be competing for RAM sales, b/c they're not going to be supplying it to anyone but themselves).

1. Apple doesn't have an unusual memory architecture. There is nothing new about UMA. It is used in consoles, in iGPUs, and even in good old SGI machines.

2. The memory used in the M1 is LPDDR4X. A commodity part. A more expensive part in the DRAM spectrum, but still a commodity.
The DRAM does not live inside the SoC. That is misreporting by many media outlets. It doesn't. It sits "next to" the SoC.

3. The last point would be the same as Apple buying TSMC. Those costs require economies of scale to be amortized (see the rough sketch at the end of this post).

4. And if Apple really wanted a "significant performance boost", you could argue they should buy the most expensive RAM: SRAM.
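
Rough numbers for point 3, as a back-of-the-envelope sketch. Every figure here is hypothetical, made up purely to show why volume dominates the per-unit cost of custom silicon:

```swift
// Back-of-the-envelope NRE amortization. All figures are hypothetical.
func perUnitCost(nre: Double, unitCost: Double, volume: Double) -> Double {
    return nre / volume + unitCost
}

let nre = 1_000_000_000.0  // assume $1B of R&D for a custom DRAM program

print(perUnitCost(nre: nre, unitCost: 10, volume: 25_000_000))     // 50.0: NRE adds $40/unit at Mac-scale volume
print(perUnitCost(nre: nre, unitCost: 10, volume: 1_000_000_000))  // 11.0: NRE adds $1/unit at commodity-market volume
```

The same die costs several times as much per unit when only one company's products can absorb the R&D.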
 
1. You could say that its memory architecture is similar to those of other unified memory architectures (such as CPUs with iGPUs) at a macro level. But if you drill down, it appears there are important differences at lower levels.

For instance, as I understand it, unified memory for iGPUs simply means that the RAM can be partitioned into a CPU block and a GPU block. By contrast, it appears what Apple has done is to configure the RAM as a single block that both the CPU and GPU can address. Thus data doesn't have to be passed between the CPU partition and the GPU partition (see the Metal sketch at the end of this post). See: https://forums.macrumors.com/thread...st.2272669/page-4?post=29332812#post-29332812

And there are probably several other unique aspects to Apple's unified memory architecture that aren't yet publicly known. Thus, to give a rough analogy, saying Apple doesn't have an unusual memory architecture because other systems also use unified memory seems a bit like saying that Apple's M1 CPU architecture isn't unusual because many other systems use the ARM ISA. Yes, the ISAs are the same, but (as I'm sure you know) at a lower (microarchitecture) level, Apple's chips are unique.

2. "The memory used in the M1 is LPDDR4X. A commodity part." This seems to be begging the question. Sure, they use off-the-shelf memory now, but given that they don't have their own in-house memory, what else are they going to use? Think of it this way: for many years, Apple used only off-the-shelf Intel CPUs (with occasional minor customization) in its Macs. By your argument, the fact that they did this meant they wouldn't shift to in-house custom jobs. Yet they did. [Yes, the Intel CPUs are off-the-shelf rather than commodity, but the same principle applies.]

"The DRAM does not live inside the SoC." Your language doesn't correspond to Apple's own definition. Apple defines the SoC as a package that includes the RAM (see: https://www.apple.com/mac/m1/). However, while the RAM is part of the SoC package, what you may be thinking of is that it is on a separate die within that package. See: https://forums.macrumors.com/thread...o.2267256/page-31?post=29220991#post-29220991 Also, I'm not sure why you're bringing this up, since I never mentioned this point in the post to which you're responding.

3. Nope, it wouldn't be. It would be Apple doing the same thing with RAM as it does with the M1 CPU.

4. Apple pays for a performance boost if it's cost-effective and works within their size and power constraints. If a custom RAM architecture could be more performant because it's designed to work specifically with Apple's memory architecture, its increased performance could result from a more synergistic design rather than more expensive fabrication. Yes, they're not going to get the economies of scale from which commodity DRAM currently benefits, but it's Apple, so they will have significant economies of scale nonetheless.
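
Here's what I mean by "a single block both can address", from the programmer's side. This is my own minimal Metal sketch of the shared-allocation model, purely illustrative (it is not Apple's internal design, and a real program would hand the buffer to a compute or render pipeline):

```swift
import Metal

// A .storageModeShared buffer is one allocation visible to both CPU and GPU.
guard let device = MTLCreateSystemDefaultDevice(),
      let buffer = device.makeBuffer(length: 1024 * MemoryLayout<Float>.stride,
                                     options: .storageModeShared) else {
    fatalError("Metal unavailable")
}

// The CPU writes straight into the allocation...
let floats = buffer.contents().bindMemory(to: Float.self, capacity: 1024)
for i in 0..<1024 { floats[i] = Float(i) }

// ...and a GPU kernel can then read `buffer` as-is: no blit from a
// "CPU block" to a "GPU block". With a discrete GPU you would typically
// copy into a .storageModePrivate buffer in VRAM first.
```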
 
Little to be gained from customizing the DRAM, at least architecturally. What Apple will do is continue to put DRAM within the SoC package for its high-end products, but that DRAM will act as a system cache. This achieves most or all of the benefit of their UMA architecture for nearly every workload. The only time a penalty would be paid is when the working set is larger than that cache, AND when it is being accessed in an unpredictable manner (so that the cache prefetch algorithm can’t successfully preload the cache).
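
To spell out that penalty case with the standard average-memory-access-time formula (the latencies below are placeholders I picked for illustration, not measured Apple-silicon numbers):

```swift
// AMAT = hitRate * cacheLatency + (1 - hitRate) * (cacheLatency + missPenalty)
// Latencies are illustrative placeholders only.
func amat(hitRate: Double, cacheNs: Double, missPenaltyNs: Double) -> Double {
    return hitRate * cacheNs + (1 - hitRate) * (cacheNs + missPenaltyNs)
}

print(amat(hitRate: 0.99, cacheNs: 50, missPenaltyNs: 100)) // 51.0 ns: prefetch-friendly workload
print(amat(hitRate: 0.60, cacheNs: 50, missPenaltyNs: 100)) // 90.0 ns: large, unpredictable working set
```

The on-package DRAM only stops helping when the hit rate collapses, which is exactly the large-working-set-plus-unpredictable-access case above.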
 
What about advantages not particular to Apple's specific architecture? E.g., you mentioned Apple might develop its own light-sensor chips. Those chips wouldn't be particular to Apple's various camera architectures, right? Given this, what is the difference between Apple investing R&D dollars to develop better light sensors vs. investing R&D dollars to develop better RAM? Granted, these are very different technologies; yet, in either case, you're trying to do better than industries that already produce highly optimized products in these areas. So what is different about sensor technology that would make Apple's R&D dollars more likely to yield an improvement over existing tech than would be the case for RAM?

As an aside, I've not liked Apple's approach to cameras or their sensors. For the phones, historically they've used some of the smallest sensors in the industry. The Huawei Mate 40 (9.1 mm thick; no camera bump) has a better camera and larger sensor than the iPhone 12 Pro Max, even though the iPhone, with its 2.79 mm camera bump, allows for more total camera thickness (7.39 mm + 2.79 mm = 10.18 mm) than the Huawei's 9.1 mm.
 

Why wouldn’t their own sensors be specific to their own camera architecture? After all, computational photography is a big deal to Apple. And there are huge differences between different camera sensors and sensor architectures - lots of room for customization. Unlike DRAM. All DRAM is fundamentally the same; the differences are in how it is partitioned into blocks and what the bus interface looks like. There is little room for specialization there. Compare that to sensors, where you can be backside-illuminated or not, with different pixel sizes, different pixel spacing, different read-out circuits, different filter technology and patterns, different anti-aliasing tricks, Foveon sensors, microlenses, etc. Apple could do all sorts of custom things, even going as far as building image-processing technology onto the sensor itself.

DRAM? Very little room for improvement of the DRAM cells themselves (they’re just a capacitor and a transistor), and anything else they want they can get from vendors.
 
Why wouldn’t their own sensors be specific to their own camera architecture? ... you can be backside-illuminated or not, with different pixel sizes, different pixel spacing, different read-out circuits, different filter technology and patterns, different anti-aliasing tricks, Foveon sensors...
Sure, all those sensor variations exist. And if Apple came up with a more performant sensor for, say, smartphones, that would be of benefit to them. But wouldn't such a more performant sensor technology be of benefit to everyone making smartphone cameras who is interested in the same optimizations, regardless of what their camera architecture is? [Of course, they'd need the right sensor dimensions for their focal length and aperture, but that's trivial.] I.e., what is unique about Apple's lens design, and other aspects of its camera architecture, that would make a specific customized sensor benefit Apple's cameras, but not anyone else's (which is what I meant by 'advantages particular to Apple's specific camera architecture')?

DRAM? Very little room for improvement of the DRAM cells themselves (they’re just a capacitor and a transistor), and anything else they want they can get from vendors.
Maybe at the level of the individual DRAM cells; but, as you know, there are different ways those DRAM chips can be configured, e.g., DDR, HBM, and HMC. Though it sounds like you're saying that if Apple wanted HBM or HMC, it could readily get whatever it wants from the vendors—and that Apple is unlikely to invest in developing new memory technology (e.g., MRAM and ReRAM), because that research is too basic for them.
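
For a rough sense of what those configurations trade off: peak bandwidth is just bus width times transfer rate. The M1 line below matches Apple's quoted ~68 GB/s; the HBM2 line uses generic per-stack spec values rather than anything Apple-specific:

```swift
// Peak bandwidth (GB/s) = (bus width in bytes) x (mega-transfers/s) / 1000.
func peakGBps(busBits: Double, megaTransfers: Double) -> Double {
    return (busBits / 8) * megaTransfers / 1000
}

print(peakGBps(busBits: 128,  megaTransfers: 4266)) // ~68.3 GB/s: M1's LPDDR4X-4266 on a 128-bit bus
print(peakGBps(busBits: 1024, megaTransfers: 2000)) // 256 GB/s: one generic HBM2 stack (1024-bit, 2 GT/s)
```

So the big deltas come from the interface configuration rather than from the cells themselves, which I take to be your point.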

But why wouldn't the same apply to camera sensors, i.e., why couldn't it get whatever it wanted from the sensor manufacturers, like, say, Sony? I'm guessing your answer would be that the level of customization available for sensors is much greater than that involved in the various RAM types.
 

Apple has very specific ideas about how cameras should work, which is why theirs work differently than, for example, Samsung's and Google's. They have their own ideas about color rendition, the relative contributions of optical enhancements vs. computational enhancements, etc. They also have their own autofocus techniques. In the end, sensors provide so much more of a possibility of differentiating their product vs. the competition than could ever be accomplished with any sort of custom DRAM.

As for just asking Sony for what they want, it’s a possibility, but so was “just ask Qualcomm to make the mobile CPU they want.” At a certain point the advantages of doing it yourself win: 1) You control your own roadmap; you don’t have to worry about someone else executing. 2) It’s harder for your advancements to end up in your competition’s products. 3) Third-party vendors may not really want to make major changes to their products just for one customer, no matter how big. 4) Bringing development in-house leads to synergies: the people designing the lens, the sensor, the CPU with its image-processing circuits, etc. can all work together. Stuff that’s currently on the M1 could be moved to the sensor, for example, to pre-process data continuously at the source. Multiple sensors could even be configured onto one big die that has exactly the form factor to fit the allocated space in the phone, with digital logic, RAM, etc. filling the gaps between the sensors.

The camera is such an important feature on iPhones that I really see Apple doing this soon. As for DRAM, 99% of the die is DRAM cells. All that changes is some interface logic and I/O. And there are so many DRAM manufacturers that you can easily find one or two that are willing to make whatever you want if you supply them with the circuits. Whereas there aren’t that many sensor suppliers.
 
Apple only uses premium components as a last resort, unless they're visible or touchable.
As for modems, don’t fool yourself into believing the M1 is indicative of what they can do in this area. The patent system is a sham; just look at the BS Apple gets granted on a regular basis: very broad, with no detail on how to actualise. With QC’s portfolio of patents, Apple will have to pony up for licences.
They would be better off ditching 5G (it’s a joke) and optimising 4G.

That doesn't sound very Apple-y.

Deliberately skimping on "premium" components, settling for lesser, inferior ones just to save money? Any other company, I would have said absolutely, but not Apple...

If customers come first and standards are higher (and they've proved that with their Macs' trackpads, etc., where similar components from less competitive companies would fail sooner), then that gets the thumbs up from me.

After all, that is why we pay high prices. Pocketing the money while dragging customers through the dirt just doesn't sit well with something that's premium.

In fact, of all the iDevices and Macs (desktops and laptops) I have had over the years, not one of them needed repairs...

The only exception was an early 2011 MacBook Pro. No iPhone outlived its life.
 

1. I am well aware of zero memory copy, and it is nothing new. It exists on both current Intel and AMD iGPUs on the PC (no one cares to implement it, though, since dGPUs are the norm) and is used in consoles.

> Nope, it wouldn't be. It would be Apple doing the same thing with RAM as it does with the M1 CPU.

Again: you are suggesting Apple design and build its own custom DRAM, and that is not what the M1 does. Apple does a custom design on top of ARM, built by TSMC. Using your analogy again, the equivalent would be Apple custom-speccing an HBM part and having Samsung build it. There are higher-bandwidth and lower-latency memory variants on the market. None of these requires special RAM R&D invested in and manufactured solely by Apple.

I also don't think you quite grasp the difference in economies of scale between a custom ARM design and custom memory. If I were to use your custom-memory analogy, Apple would be going all the way to a new CPU ISA, rather than building on top of the ARM ISA, which has an enormous ecosystem of software libraries around it, partly shared by Android and now the server market. Building a new ISA with a custom software ecosystem is a task that even Apple would not take on.

> Apple pays for a performance boost if it's cost effective,

Precisely: it isn't cost-effective, so they won't pay for it.

I think Camier already gave you enough details and answers. It is not the technical side; it is the business side of things that doesn't make any financial or economic sense.
 

“On top of ARM” is a bit misleading. The only thing Apple gets from ARM is the instruction set. Of course you know this, but some may get the wrong idea.
 