From HP regarding the performance improvements soldered RAM brings.
"Schnell explained that this performance boost might not ever make any difference to consumers. “An increase in bus speed can have a 3-5% system performance improvement, but that improvement may not be noticeable at a customer level because system memory may not be the key limiter in system performance."
It saves space and it's cheaper. Those are the reasons for the switch. I'm happy for anyone to post evidence to the contrary, which would be surprising when no manufacturer wants to stick their neck out and show verifiable numbers. Someone can claim something is an improvement when it's only a 0.1% improvement, and they'd be telling the truth. It's telling that they never want to say how much a consumer, a pro, or an enthusiast might actually benefit.
Edit: Having looked at a few expert testimonies, it seems soldering RAM shows performance gains only as a theoretical hypothesis, not in any real-world measurable way.
I know you've said we'll not be communicating anymore, 😔, but despite your belligerence it's worth touching on each point for anyone else who cares to understand.
Power:
Intel has published a 40% power saving due to on-package RAM. You keep saying nobody is publishing numbers, but there it is. Intel provides media packs and http://intel.com/performanceindex, where they provide support for many of their claims. If you don't find what you're looking for there, I suggest you reach out to their PR department and ask them to support their public claim.
I'm not sure why you're so unwilling to accept the physical reality of the situation though. As I said:
Shorter lines and no sockets mean less contact resistance, less trace capacitance, lower I^2R losses, lower fCV^2 losses, and lower-power drivers.
The physics here are pretty clear.
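To put some very rough numbers on the fCV^2 term, here's a quick Python sketch. Every capacitance, voltage and toggle-rate figure in it is an assumption I picked for illustration (a long socketed DDR5-5600 trace plus connector versus a short on-package LPDDR5X-8533 route), not vendor data; the point is the C*V^2 scaling, not the exact milliwatts.

```python
# Rough dynamic-power comparison for a single data line, P ~ alpha * f * C * V^2.
# All capacitance, voltage and toggle-rate numbers below are illustrative assumptions.

def switching_power_mw(toggle_rate_hz, capacitance_pf, vdd_v, activity=0.5):
    """Dynamic power spent charging/discharging the line capacitance, in mW."""
    return activity * toggle_rate_hz * (capacitance_pf * 1e-12) * vdd_v**2 * 1e3

# Socketed DDR5 DQ line: long board trace + connector (assumed ~5 pF), VDDQ ~1.1 V
socketed = switching_power_mw(toggle_rate_hz=2.8e9, capacitance_pf=5.0, vdd_v=1.1)

# On-package LPDDR5X DQ line: very short route (assumed ~1 pF), VDDQ ~0.5 V
on_package = switching_power_mw(toggle_rate_hz=4.27e9, capacitance_pf=1.0, vdd_v=0.5)

print(f"socketed DDR5 line : {socketed:.2f} mW")
print(f"on-package LPDDR5X : {on_package:.2f} mW")
print(f"ratio              : {socketed / on_package:.1f}x")
```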
Performance:
You're referring to throughput. Short, predictable, controlled-impedance lines without sockets mean lower latencies. The difference in latency is probably not huge, but it's there. It also makes routing a much wider bus more manageable.
It'll depend on the workload, as all these things do, but it's an improvement you get at the same technology level.
Again, this is a pretty straightforward problem to analyze. When routing to DDR, you need to length-match your lines. The wider the bus, the harder that is, and it means all lines must be the same length as the longest line. Every 15mm you add to the longest line, which every other line must then match, adds a full clock of access latency at LPDDR5x-8533. That's bigger than I'd anticipated, actually. If your RAM is just 15cm from your processor, about 6 inches, you've added 10 clocks of access delay.
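If you want to see where that rule of thumb comes from, here's a small sketch. The only assumption is a typical FR4 propagation delay of roughly 6.7 ps/mm (~170 ps/inch); whether you land on exactly one clock per 15mm depends on your stackup and on whether you count UI or WCK periods.

```python
# Convert added trace length into round-trip delay and LPDDR5X-8533 clock periods.
# Assumes ~6.7 ps/mm propagation on typical FR4 (~170 ps/inch).

PS_PER_MM = 6.7                            # assumed FR4 propagation delay
DATA_RATE_MTS = 8533                       # LPDDR5X-8533 data rate in MT/s
WCK_PERIOD_PS = 2 * 1e6 / DATA_RATE_MTS    # WCK runs at half the data rate -> ~234 ps

def added_clocks(extra_length_mm):
    """Round-trip delay (command out, data back) expressed in WCK clock periods."""
    round_trip_ps = 2 * extra_length_mm * PS_PER_MM
    return round_trip_ps / WCK_PERIOD_PS

for mm in (15, 75, 150):
    print(f"{mm:4d} mm extra -> {added_clocks(mm):.1f} clocks of added access latency")
```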
This, by the way, is the reason DDR bursts. It's not truly random access: you typically get the data you want as part of a wider burst of data, because it's a better bet to risk sending some useless data than to wait for each word to be individually requested through all that and the other sources of latency. The extra data traffic just burns more power, and bursts are limited to a depth of 16 in DDR5, so you're not going to hide it all and it won't help you in truly random reads or writes. Like I said, workload dependent.
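For a sense of how much of that traffic is wasted in the worst case: a standard DDR5 DIMM is split into two independent 32-bit subchannels, so a BL16 burst moves exactly one 64-byte cache line. A quick sketch, with the 8-byte random read pattern being a made-up worst case:

```python
# DDR5: each DIMM subchannel is 32 bits wide and bursts 16 beats (BL16),
# so one burst moves exactly one 64-byte cache line.
SUBCHANNEL_BITS = 32
BURST_LENGTH = 16
bytes_per_burst = SUBCHANNEL_BITS * BURST_LENGTH // 8   # 64 bytes

# Hypothetical workload: truly random 8-byte reads, one per burst.
useful_bytes = 8
efficiency = useful_bytes / bytes_per_burst
print(f"bytes per burst   : {bytes_per_burst}")
print(f"useful per access : {useful_bytes} bytes -> {efficiency:.0%} of the traffic")
```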
Using typical fly-by wiring for the command lines, a standard DIMM adds almost 15cm of routing length on the module itself. An SO-DIMM manages to cut that in half. But in both cases that's just the module, before even accounting for the routing needed to reach it and the delay through the connector. On-package memory is not only closer to the controller, it's also configured as dice stacked vertically rather than full packages laid out laterally. The height of a die is far less than the length of a package and its pins.
An increase in bus speed can have a 3-5% system performance improvement
That's quite significant for a single design change. There have been many generations of Intel processors where the aggregate improvement was 10%.
but that improvement may not be noticeable at a customer level because system memory may not be the key limiter in system performance."
It wasn't a quote from HP, and you're omitting the punchline: "if the storage SSD is the system bottleneck, then a small improvement in memory performance won't matter." That's just a dumb thing for Schnell from Dell to say. He may as well have said "it may not be noticeable at a customer level because the user might be asleep or looking out the window when the operation happens." He's probably trying to make the point that your mileage may vary depending on your workload and the other bottlenecks in the system, but that goes without saying (a faster CPU doesn't help if you're memory constrained, a faster SSD doesn't help if your algorithm takes too long to process the data, etc., etc.). It just has the unfortunate consequence of leading people like you to think that "experts have divided opinions".
3-5% is a significant improvement without having to wait for a new RAM technology and it's based on solid physical grounds.
Reliability:
If you think that a well designed RAM slot allows free movement of RAM inside it, you have a funny idea of what they're like.
If you think long plastic rails, springs, clips, pressure contacts and hand assembled removable boards mounted at one edge don't move, you have a funny idea of what they're like.
Cost:
Who exactly do you think is saving any money here? Intel? They now need to buy memory they didn't have to before. The OEM? They now have a single source they're forced to buy their memory from, and they need to guess the demand for the various memory configurations when ordering parts ahead of a scheduled build rather than installing modules on the machine's way out the door. There's more to the cost of a product than raw material costs: inventory management and delivery delays play a role, as do price competition and having multiple sources to mitigate shortages from any one supplier.
From the same article you cherry-picked the earlier quote from, Schnell from Dell says "There is no effect on manufacturing as the DRAM packages are standard. There is no impact on the retail price of our laptops". That's followed by a quote from Lenovo: "Both soldered and socketed RAM designs are now quite mature. As a result, we see no impact on the manufacturing process and, therefore, the cost to the consumer." That leads the author to the conclusion: "It seems that, although beneficial to the manufacturing process, the cost and the ease of production don't play a big part in companies choosing soldered RAM."
And those quotes were before Intel became their sole source of memory.
Size:
What kind of RAM sticks were they comparing to, to save 250mm^2? 😅😅
Here, you and I agree. It's a laughably small estimate. An SO-DIMM, not including the socket itself, is 2030mm^2. The entire Lunar Lake package, RAM and Foveros stack included, is 742mm^2. That's nearly 1300mm^2 of board space reduction, and much more than that once routing, sockets, and thermals are accounted for. That's before even looking at the vertical dimension.
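If anyone wants to check that arithmetic, here it is using the figures above (the 27.5 x 27 mm package dimensions are the reported ones that give the 742mm^2; treat everything as approximate):

```python
# Board-area comparison using the figures cited above (all approximate).
SO_DIMM_MM2 = 2030             # DDR5 SO-DIMM module outline, socket and keep-out excluded
LUNAR_LAKE_MM2 = 27.5 * 27     # reported package size: CPU, RAM and Foveros base included

saved = SO_DIMM_MM2 - LUNAR_LAKE_MM2
print(f"Lunar Lake package  : {LUNAR_LAKE_MM2:.0f} mm^2")
print(f"SO-DIMM module      : {SO_DIMM_MM2} mm^2")
print(f"net board area freed: ~{saved:.0f} mm^2 (vs. the 250 mm^2 claim)")
```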
So why are laptops moving to soldered RAM? Because it produces a smaller, faster, more reliable, lower-power product and, who knows, it might be a little cheaper too.