Interesting: RAM is now integrated, just as with M-series at 16 or 32GB. Do it like Apple!
Well, it saves a few cents on production, and has the pro-business/anti-consumer bonus of being unrepairable and un-upgradable, which should mean customers buy their next device a bit sooner than they otherwise might.
 
Well, I look forward to the M4 comparison. It's good that everyone is applying pressure to Apple, who didn't significantly move forwards with M2 or M3. M3 Pro was an especially weak and disappointing iteration. Just ramping up clock speeds each generation can't continue much longer, especially without impacting heat and battery constraints.

Yeah, but Qualcomm is currently a more serious competitor than Intel. (And even that becomes hard to answer once you get to power efficiency.)
 
Well, it saves a few cents on production, and has the pro-business/anti-consumer bonus of being unrepairable and un-upgradable, which should mean customers buy their next device a bit sooner than they otherwise might.

Yes, but it also improves performance and reliability, and reduces power draw.
 
Yes, but it also improves performance and reliability, and reduces power draw.
Careful on the performance claim; I think when that was investigated it was a smidge less than 0.1%... you'd have to check the exact numbers, but it's definitely less of a gain than you'd get from just using faster socketed RAM than what Apple uses.

I'll give you reliability, though you might recall me talking about how my soldered RAM in my MacBook Air died, bricking that machine, so I'm not sure it's much of a difference aside from better drop resistance.

The power draw difference is also close to statistically insignificant.

Edit: I'm giving soldered RAM reliability only until it breaks. Obviously once a problem does occur it's game over with soldered RAM!

The positives are MASSIVELY outweighed by the negatives. Companies do it for their benefit, not ours; let's not kid ourselves.
 
Yeah, but Qualcomm is currently a more serious competitor than Intel. (And even that becomes hard to answer once you get to power efficiency.)
Let's see the reviews first before judging... you'd hope Intel have a rabbit up their sleeve considering the advantages they've had over the field for decades.
 
There are two chiplets: one for logic (Intel fab) and one for CPU/GPU (TSMC fab), it says here:


No it doesn't.

" ... Intel's new Core Ultra 200V mobile processors, which launched ahead of the IFA conference in Berlin this week, are actually being outsourced to TSMC for manufacturing and constructed using the x86 giant's Foveros 3D packaging tech. ... "



Foveros really isn't the 'logic' chip. It is more the connectivity chip. The memory controller, PCI, IO, etc. are in the TSMC chips (plural).


" ... Intel's Lunar Lake tiles are not being fabbed using any of their own foundry facilities – a sharp departure from historical precedence, and even the recent Meteor Lake, where the compute tile was made using the Intel 4 process. Instead, both tiles of the disaggregated Lunar Lake are being fabbed over at TSMC, using a mix of TSMC's N3B and N6 processes ..."


And.


[Image: Intel Lunar Lake Hot Chips 2024 slide]


Compute Tile and Platform Tile are TSMC.

The Foveros layer may have some modest amount of logic to run the networks between the tiles and the external components, but it isn't the old discrete Platform Controller chip logic from previous generations. That is in the Platform tile.


Primarily, Intel is just not using TSMC's chiplet packaging technology. Given the volume of work Intel needs done, it probably wouldn't work with TSMC even if they wanted to (which they don't, as Intel is in much better shape to grow its packaging business than its logic chip foundry business right now).



And this part ...

" ...
As CEO Pat Gelsinger noted during Intel's disastrous Q2 earnings call last month, this decision is really a stopgap until its Foundry division can ramp its 20A and 18A process tech with products expected to return home in the 2026 time period. ..."

20A was a stopgap... but they aren't going to use it.


" ... Surprising news about Intel continues to emerge with the chipmaker vowing to use an external foundry in place of its own 20A process to make the upcoming Arrow Lake processors, ... "



Arrow Lake was nominally targeted at TSMC in the first place. Duplicating the CPU/GPU chip onto 20A was more of a 'status' thing that Intel hoped to throw money at, probably both to jump-start the external foundry business and to learn more to make 18A more solid. Turns out they just don't have that kind of 'extra' money to throw around anymore. Apparently, nobody major picked up 20A as an external foundry option.

Despite the 'doom and gloom' spun by this and other articles, Broadcom stating that 18A isn't high-volume ready now is like a 'sky is blue' comment, as it wasn't supposed to be high-volume ready until 2025 anyway. (Although it's somewhat likely someone at Intel had been spinning a story that it might be. Intel has 'pulled forward' 18A dates, which is why it's suspect that anyone would bet the farm on that.) Some folks probably assumed 20A would get most of the 'bugs out', making 18A a less risky choice.

[Intel probably has ramped somewhat on 20A. This is more of a 'stop digging a deeper hole' issue for them. They have likely already spent a large amount of money to get 20A ready for high volume. Intel is going to need a really good "dropped 20A to make 18A much better" explanation for their potential customers.]


Pretty good chance 20A doesn't have major technical problems. However, it is likely pragmatically too expensive (duplicative effort, and hardly any other user of the process). The foundry business is already underwater in terms of revenue. And the products business mainly looks good because Intel is pushing more of the loss-carrying toward the foundry business (plus losing share to AMD and others and having to throw big discounts to hold onto design wins).



Interesting: RAM is now integrated, just as with M-series at 16 or 32GB. Do it like Apple!

RAM was being soldered onto GPU cards LONG before Apple did it. This isn't an Apple invention at all.
 
You weren't around for the MHz wars between Intel and AMD back in the Thunderbird, Duron and Athlon days?

AMD was the first to 1GHz, which hit Intel pretty hard, and they also had the first x86-64 chip in the consumer market. All the tech sites were making fun of them and saying they were done for... they were the definite underdog for a few years.

The last way I knew to gauge processor speed was by the MHz (somehow an 800MHz PowerPC was equivalent to a 1.6GHz Intel, idk how that works). While AMD had some time in the spotlight, I never saw a time when AMD was dominant. It was an Intel world up until around 2017, when they got KOed: Apple introduced the M processor, ARM seems to be the future, Nvidia ate the AI and graphics market, and more people today use mobile than PC/laptop, where Intel does not compete.
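For the curious, the "megahertz myth" math is just instructions-per-clock times clock speed. A minimal sketch, where the IPC figures are illustrative assumptions rather than measured values:

```python
# Throughput ~ IPC x clock, not clock alone. IPC numbers here are
# illustrative assumptions, not benchmarks.

def relative_perf(ipc: float, clock_ghz: float) -> float:
    """Crude relative-performance score: instructions per clock x GHz."""
    return ipc * clock_ghz

powerpc = relative_perf(ipc=2.0, clock_ghz=0.8)   # assumed high-IPC design
pentium = relative_perf(ipc=1.0, clock_ghz=1.6)   # assumed long-pipeline design

print(powerpc, pentium)  # 1.6 and 1.6 -- same throughput, half the clock
```

That's how an 800MHz part could plausibly trade blows with a 1.6GHz one.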

They also lost on cellphone modem chips, which is a real head-scratcher for me. How can a $100B company that has specialized in making CPUs for decades not build a cellphone modem?

I just checked and AMD is double the market value of Intel 😱
 
Well, it saves a few cents on production, and has the pro-business/anti-consumer bonus of being unrepairable and un-upgradable, which should mean customers buy their next device a bit sooner than they otherwise might.

Well, it's also potentially much faster when it's on-chip. It's like having your bookshelf next to your desk, as opposed to the opposite side of the room. Faster to get information from the closer shelf than get up and go to the other one.
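To put rough numbers on the bookshelf analogy (a back-of-envelope sketch; the trace lengths and the ~c/2 propagation speed are assumptions, not any vendor's spec):

```python
# Signal flight time over a PCB trace, on-package vs. socketed DIMM.
# All figures are illustrative assumptions.

PROP_VELOCITY_CM_PER_NS = 15.0   # ~half the speed of light, in FR-4

def round_trip_ns(trace_cm: float) -> float:
    """Out-and-back flight time for a signal over the given trace length."""
    return 2 * trace_cm / PROP_VELOCITY_CM_PER_NS

on_package = round_trip_ns(1.5)    # assumed CPU-to-on-package-DRAM route
socketed = round_trip_ns(10.0)     # assumed CPU-to-DIMM-slot route

print(f"{on_package:.2f} ns vs {socketed:.2f} ns")  # 0.20 ns vs 1.33 ns
# Against a typical ~14 ns DRAM access, the distance saving is real but
# small -- which is roughly where the disagreement in this thread sits.
```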
 
Well, it's also potentially much faster when it's on-chip. It's like having your bookshelf next to your desk, as opposed to the opposite side of the room. Faster to get information from the closer shelf than get up and go to the other one.
Potentially? Theoretically in a way undemonstrated in consumer technology?
 
"As CEO Pat Gelsinger noted during Intel's disastrous Q2 earnings call last month, this decision is really a stopgap until its Foundry division can ramp its 20A and 18A process tech with products expected to return home in the 2026 time period. ..."

"The external partner is likely to be Taiwanese silicon supremo TSMC, which is already making the compute tile die for the Lunar Lake Core Ultra 200V mobile processors that launched this week. This move was supposed to be a stopgap until Intel's Foundry division was able to ramp up both the 20A and 18A processes, but it now appears the firm is ditching 20A instead, possibly as a cost-saving measure."

"Yet reports emerged yesterday that Broadcom, which Intel is trying to court as a customer for its foundry biz, had tested wafers produced using the 18A node and rejected them, concluding the manufacturing process is not yet viable for high-volume production."

Sorry @dconstruct60, gives me no joy to report that.
 
Potentially? Theoretically in a way undemonstrated in consumer technology?

I'm not going to bat for Intel, but here's their slide about it from their presentation. Of course, one should wait for reviews, but it's not impossible to imagine. Apple's M series chips take great advantage of on-package memory, no reason Intel would not be able to do the same.

[Image: Intel Lunar Lake memory-on-package slide]
 
I doubt Intel considers Apple M chips competition for gaming. Most of the games there either aren't available on macOS or are emulated, so it would have been very, very difficult to include the M3 in that chart.

I was neither talking about gaming in my first post nor expecting them to include Macs in their graphs, but Intel showed only gaming benchmarks for their GPU, so I had to use that to compare the GPUs, and as it shows, their “fastest iGPU in the world” is as fast as the base M3 8-core GPU.
 
I'm not going to bat for Intel, but here's their slide about it from their presentation. Of course, one should wait for reviews, but it's not impossible to imagine. Apple's M series chips take great advantage of on-package memory, no reason Intel would not be able to do the same.

[Image: Intel Lunar Lake memory-on-package slide]
What real-world gains does any of that equate to vs the best socketed stuff? And what kind of RAM sticks were they comparing to, to save 250mm²? 😅😅
 
If the bus widths are the same and the memory is at the same generation, it isn't necessarily 'faster'. Apple is wider than the most generic PC that is roughly keeping up with the same DRAM standard updates. Intel has gone wider here also, if I recall correctly.

Multiple DIMMs on a single channel is chasing capacity far more than "faster" throughput on parallel loads.

You’re referring to throughput. Short, predictable, controlled-impedance lines without sockets mean lower latencies. The difference in latencies is probably not huge, but it's there. It also makes routing a much wider bus more manageable.

The number is only relative to what Intel was doing. Dropping the need to cover multiple protocols (DDR and LPDDR) probably helps also.

Shorter lines and no sockets mean less contact resistance, less trace capacitance, lower I²R losses, lower fCV² losses, and lower-power drivers.
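For anyone who wants to see what those loss terms look like, here's a minimal sketch; every constant is a made-up but plausibly scaled assumption, since the point is the scaling, not the exact milliwatts:

```python
# Per-line power for a memory trace: conduction (I^2 * R) plus the cost
# of charging trace capacitance each cycle (f * C * V^2). Assumed values.

def i2r_mw(i_ma: float, r_ohm: float) -> float:
    """Conduction loss in mW for a given drive current and trace resistance."""
    return (i_ma / 1000) ** 2 * r_ohm * 1000

def fcv2_mw(f_mhz: float, c_pf: float, v: float) -> float:
    """Switching loss in mW for a given toggle rate, capacitance, and swing."""
    return f_mhz * 1e6 * c_pf * 1e-12 * v ** 2 * 1000

# Socketed: longer trace plus connector -> more R and C (assumed values).
socketed = i2r_mw(10, 1.0) + fcv2_mw(3200, 5.0, 1.1)
# On-package: short trace, no connector (assumed values).
on_pkg = i2r_mw(10, 0.3) + fcv2_mw(3200, 1.5, 1.1)

print(f"socketed ~{socketed:.1f} mW/line, on-package ~{on_pkg:.1f} mW/line")
# Multiplied across 100+ signal lines the delta adds up, though it stays
# a small slice of total system power -- hence the argument above.
```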
 
Careful on the performance claim; I think when that was investigated it was a smidge less than 0.1%... you'd have to check the exact numbers, but it's definitely less of a gain than you'd get from just using faster socketed RAM than what Apple uses.

It'll depend on the workload, as all these things do, but it's an improvement you get at the same technology level.

Edit: I'm giving soldered RAM reliability only until it breaks. Obviously once a problem does occur it's game over with soldered RAM!

That's right. Better to have something not break than to have it break and have to fix it. I'm not sure what the failure mode was for your soldered RAM, or how you're sure it was the RAM components specifically versus anything else, but statistically the reliability improvement in a portable device is significant. I'd be willing to bet that more people have socketed RAM fail than voluntarily want to change their RAM.

The power draw difference is also close to statistically insignificant.

This is certainly not true.

The positives are MASSIVELY outweighed by the negatives.

I disagree. I think the balance is massively toward the positive for the vast majority of users.
 
what kind of RAM sticks were they comparing to, to save 250mm²? 😅😅

Yeah, that does seem laughably small... They must be accounting for the fact that most RAM sticks overlap with other sticks or other logic, so the savings are mostly in volume, but the cross-section isn't that big.
 
It'll depend on the workload, as all these things do, but it's an improvement you get at the same technology level.



That's right. Better to have something not break than to have it break and have to fix it. I'm not sure what the failure mode was for your soldered RAM, or how you're sure it was the RAM components specifically versus anything else, but statistically the reliability improvement in a portable device is significant. I'd be willing to bet that more people have socketed RAM fail than voluntarily want to change their RAM.



This is certainly not true.



I disagree. I think the balance is massively toward the positive for the vast majority of users.
Wow, someone's drunk a lot of the Apple Kool-Aid! Jeeeeeeeeze. 🤦‍♂️ You fell hook, line, and sinker for the corporate spiel.

Care to say how much of an improvement in performance soldering provides?

Care to offer any specific numbers on battery savings vs socketed?

Care to explain how soldering magically makes RAM substantially less prone to malfunction?


It's easy to tell if it's specifically RAM that failed, by the way, due to some specific beeps on Mac.
 
Wow, someone's drunk a lot of the Apple Kool-Aid! Jeeeeeeeeze. 🤦‍♂️ You fell hook, line, and sinker for the corporate spiel.

Apple and Intel both have the same spiel, it seems, and it aligns with basic engineering. I'd say your spiel is the outlier in this case.

Care to say how much of an improvement in performance soldering provides?

Care to offer any specific numbers on battery savings vs socketed?

Look at what I've written above. If you have more specific questions then ask them specifically.

Care to explain how soldering magically makes RAM substantially less prone to malfunction?

Things that move, break. You're relying on dozens of length- and impedance-matched lines operating at close to 5GHz. It doesn't take much to disrupt that at all.

It's easy to tell if it's specifically RAM that failed, by the way, due to some specific beeps on Mac.

There are specific beeps for RAM chips versus controllers, traces, or power supplies connected to the RAM? There are a lot of failure modes in accessing RAM that can't be solved by swapping DIMMs.
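On the "close to 5GHz" point above, a quick sketch of how tight the matching has to be (the 10%-of-UI skew budget and the stripline delay figure are assumptions for illustration):

```python
# Timing budget for length matching on a fast memory bus.
# Illustrative assumptions, not a real DDR/LPDDR spec calculation.

DATA_RATE_GTPS = 5.0           # ~5 GT/s, per the post above
DELAY_PS_PER_MM = 6.7          # rough stripline propagation delay (~c/2)

unit_interval_ps = 1000 / DATA_RATE_GTPS        # 200 ps per bit
skew_budget_ps = unit_interval_ps * 0.10        # assume 10% of UI for skew
max_mismatch_mm = skew_budget_ps / DELAY_PS_PER_MM

print(f"UI {unit_interval_ps:.0f} ps, skew budget {skew_budget_ps:.0f} ps, "
      f"max mismatch ~{max_mismatch_mm:.1f} mm")
# ~3 mm of allowed mismatch across dozens of lines: easy to hold on a
# package, harder through a socket and a longer board route.
```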
 
Apple and Intel both have the same spiel, it seems, and it aligns with basic engineering. I'd say your spiel is the outlier in this case.



Look at what I've written above. If you have more specific questions then ask them specifically.



Things that move, break. You're relying on dozens of length- and impedance-matched lines operating at close to 5GHz. It doesn't take much to disrupt that at all.



There are specific beeps for RAM chips versus controllers, traces, or power supplies connected to the RAM? There are a lot of failure modes in accessing RAM that can't be solved by swapping DIMMs.
Regarding the last part, the diagnosis was made by Apple themselves. 🤷🏼‍♂️

As for the rest, if there's zero evidence you can present that performance or battery life is boosted by even 1%, then you're wasting our time. I feel second-hand embarrassment that you're so happy with unsubstantiated claims of performance gains, regardless of whether the gain is 0.01% or 1%. You'd think you'd be interested to know before arguing the subject in public.

If you think that a well designed RAM slot allows free movement of RAM inside it, you have a funny idea of what they're like.

Frankly, there's no point in us ever communicating again. All the best.
 
Regarding the last part, the diagnosis was made by Apple themselves. 🤷🏼‍♂️

As for the rest, if there's zero evidence you can present that performance or battery life is boosted by even 1%, then you're wasting our time. I feel second-hand embarrassment that you're so happy with unsubstantiated claims of performance gains, regardless of whether the gain is 0.01% or 1%. You'd think you'd be interested to know before arguing the subject in public.

If you think that a well designed RAM slot allows free movement of RAM inside it, you have a funny idea of what they're like.

Frankly, there's no point in us ever communicating again. All the best.

Your response notably lacks any counter-arguments and offers far less substance for its claims. You've got me beat on sheer bile though, I'll give you that.
 
Oh come on.
Go on, link to some evidence if you disagree... if we're talking sub-1% gains, then who cares? If any company were so confident that the improvements from soldering were significant, they'd produce like-for-like comparisons, publish them, and brag about said publications.
 
From HP, regarding the performance improvements soldered RAM brings:

"Schnell explained that this performance boost might not ever make any difference to consumers. “An increase in bus speed can have a 3-5% system performance improvement, but that improvement may not be noticeable at a customer level because system memory may not be the key limiter in system performance."

It saves space and it's cheaper. Those are the reasons for the switch. I'm happy for anyone to post evidence to the contrary, which would be surprising given that no manufacturer wants to stick their neck out and show verifiable numbers. Someone can claim something is an improvement even if it's only a 0.1% improvement, and they'd be telling the truth. It's telling that they never want to say how much a consumer, a pro, or an enthusiast might actually benefit.

Edit: Having had a look at a few expert takes, the performance gains from soldering RAM appear to exist as a theoretical hypothesis only, not in any real-world measurable way.
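The HP quote can be made concrete with a simple Amdahl-style estimate; the memory-bound fraction below is an assumption for illustration:

```python
# If memory speed only gates part of the runtime, a faster bus moves the
# overall number very little. The 15% fraction is an assumed figure.

def overall_speedup(mem_fraction: float, mem_speedup: float) -> float:
    """Amdahl's law: only mem_fraction of runtime gets mem_speedup."""
    return 1 / ((1 - mem_fraction) + mem_fraction / mem_speedup)

s = overall_speedup(0.15, 1.05)   # 5% faster bus, per HP's upper bound
print(f"{(s - 1) * 100:.2f}% overall")   # ~0.72%
# Sub-1% at the system level, matching the 'not noticeable at a customer
# level' framing in the quote.
```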
 
Well, it saves a few cents on production, and has the pro-business/anti-consumer bonus of being unrepairable and un-upgradable, which should mean customers buy their next device a bit sooner than they otherwise might.
How would the additional step of integrating RAM on a CPU package actually reduce production cost? That adds a non-trivial extra step in the production process, as well as the additional cost of the RAM itself. On top of that, this method complicates inventory, as it multiplies the number of SKUs they have to keep: each CPU and RAM combination now becomes a separate SKU. This method costs more to manufacture and inventory, not less.
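(The SKU multiplication is easy to see in miniature; the part names below are hypothetical, purely to show the combinatorics:)

```python
# Each CPU bin x RAM capacity pairing becomes a separately stocked part.
# Part names are hypothetical.
from itertools import product

cpu_bins = ["chip-A", "chip-B", "chip-C"]
ram_gb = [16, 32]

skus = [f"{cpu}/{gb}GB" for cpu, gb in product(cpu_bins, ram_gb)]
print(len(skus), skus)   # 6 SKUs from 3 CPUs x 2 RAM sizes
# Versus the socketed model: 3 bare CPUs, with RAM stocked independently.
```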

Everything I've read indicates that this CPU and RAM integration is much faster and more energy efficient, though. And the Apple Silicon Macs I have used appear to back up those claims.

To me, it finally gives Apple's lack of RAM upgradability a good reason, versus just soldering RAM to a motherboard (which itself would be quite accurately described by your initial statement, in my opinion).
 