AI this, AI that... such an overused term nowadays. AI fatigue has well and truly set in...
So much so that when I saw a YouTube video going over AI on Shearwater dive computers my first thought was, oh no, they're going to be killing divers. Completely forgot it was about "air integration" modules that monitor your gas consumption, which have been around for well over a decade. 🙄
 
This is exactly the kind of thing that has happened for the last 40 years.

If you don't need a new computer, don't buy a new computer. Wait!
 
That would require a larger phone, which appears to be the direction Apple is going with its Pro Max line of iPhones. Coupled with smaller chips, though, I fear the battery size will stay the same; and even if it does get larger, expect similar battery life in actual use. Battery technology needs to evolve already.
 
A simple way to make it easier to run AI models on the device would be to just add more RAM, the way the OnePlus 13 ships with 24 GB. But perhaps Apple demands more... courage. :apple:


(seriously, how is Apple putting in so much work just to avoid giving people more RAM in a phone 🤪)
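
For a rough sense of why RAM is the bottleneck for on-device models, here's a back-of-the-envelope sketch; the parameter counts and quantization levels are just illustrative assumptions, not anything Apple has announced.

```swift
import Foundation

// Back-of-the-envelope RAM footprint of an on-device LLM.
// Parameter counts and quantization levels are illustrative assumptions only.
func modelFootprintGB(parameters: Double, bitsPerWeight: Double) -> Double {
    // bytes = parameters * bits / 8, reported in decimal gigabytes
    parameters * bitsPerWeight / 8 / 1_000_000_000
}

let threeB = modelFootprintGB(parameters: 3e9, bitsPerWeight: 4)  // ≈ 1.5 GB
let sevenB = modelFootprintGB(parameters: 7e9, bitsPerWeight: 4)  // ≈ 3.5 GB
print(String(format: "3B @ 4-bit ≈ %.1f GB, 7B @ 4-bit ≈ %.1f GB", threeB, sevenB))
// On an 8 GB phone, the 7B case plus its working memory leaves little headroom
// for the OS and other apps, which is why more RAM makes on-device AI easier.
```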

Wow, it's getting harder and harder for me to stay with Apple knowing it will probably be 6-7 years before the iPhone even comes remotely near these sorts of specs. And I know it's not all about a spec sheet, but at some point, when even cheaper Android phones are years ahead in hardware and now have better software 🧐

This thing charges to 100% in the time it takes to plug in an iPhone and brush your teeth.




 
This is nuts, because I've been saying I want Apple to do this on Macs so we can upgrade our RAM again, and Apple's line has been that RAM on the SoC is way more performant.

Now I’m hearing it’s not!!?? I feel like I’m taking crazy pills

You are not taking crazy pills. Whoever wrote that article has no idea what they are talking about, and MR is just parroting them.

Apple could make their RAM replaceable. But it would be very expensive and likely introduce stability and power consumption issues. Not to mention the space needed. There is a good reason nobody ships wide modular RAM.
 
So if RAM is moving off the chip, will the iPhone have to get thicker?

Seems a strange thing to do when an iPhone "thin" is in the offing.
 
I don't think we'll ever be able to swap components in an iPhone.
Even for a Mac, right now, you'd have to swap the entire SoC.
If I could swap the entire SoC in a Mac, it would be pretty expensive, but I'd be happy to. But I don't think they'll do that. Even if the design stays the same, sometimes they'll improve cooling methods (e.g. the fan).

And as per the article, my understanding was that an SoC gave you great memory bandwidth because the memory is closer to the CPU/GPU. I'm not sure how another solution can improve memory bandwidth even more?
I was wondering as well, but the article explains it: the number of I/O pins limits bandwidth, like how many gates a stadium has for filling and emptying it when needed.
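
To put rough numbers on that (all figures below are illustrative LPDDR5X-class values, not confirmed Apple specs): peak DRAM bandwidth is just bus width times transfer rate, so at a given memory speed the only way to raise it is more I/O pins.

```swift
// Peak DRAM bandwidth = bus width (bits) / 8 × transfer rate (MT/s).
// Bus widths and transfer rates below are illustrative LPDDR5X-class numbers.
func peakBandwidthGBps(busWidthBits: Double, megaTransfersPerSec: Double) -> Double {
    busWidthBits / 8 * megaTransfersPerSec / 1000  // MB/s -> GB/s
}

let narrowBus = peakBandwidthGBps(busWidthBits: 64,  megaTransfersPerSec: 8533)  // ≈ 68 GB/s
let widerBus  = peakBandwidthGBps(busWidthBits: 128, megaTransfersPerSec: 8533)  // ≈ 137 GB/s
print(narrowBus, widerBus)
// Same memory speed, twice the "gates": doubling the pin count doubles peak bandwidth.
```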
 
Such a weird move. Apple said integrating memory, or unified memory, can dramatically increase performance by having zero latency, but now they want to move RAM to a separate place again just like others?

I mean, that way they can increase the memory size or even make it upgradable, but how can they increase performance then?
 
Basically, Apple is re-inventing PCs. This could be the start of user swappable RAM, SSD and potentially discrete GPUs.

Imagine a MacBook with the base GPU and the option for an Apple discrete GPU

For ex [M5 Pro 18-core GPU w/M5 Max 40-core discrete GPU]
Utterly missing the point.
The point here is the claim that PoP will not be used. It's unclear if this is even news.
Standard PoP consists of a design like below. The memory (one or two stacks, one, two or four chips high) is wirebonded to a carrier, and the SoC is mounted under the carrier.

[Attachment: Untitled 2.jpg (diagram of a standard PoP package)]


If you want to modify this, there are multiple options to do so.
We know that up to the A15 things were done like the above. With A16 things get murky. The only discussion of A16 I've ever found is https://eetimes.itmedia.co.jp/ee/articles/2210/25/news048_2.html
which gives this picture

[Attachment: mm221026_tech05.jpg (package photo from the linked article)]

This COULD be a version of standard wirebonded PoP. Or it could be something different, with VIAs going through the glass epoxy?
The article is unclear and seems to be most excited about the capacitors mounted between the SoC and the SoC carrier.
Maybe a native Japanese speaker can get something more out of it than the machine translation?

I've seen nothing whatsoever about the packaging of either A17 or A18. Die shots yes, but not the packaging.

Point is, there's nothing here about swappable DRAM. That's as dead as swappable transistors.
The issues are:
(a) Will the package be vertical ("3D") rather than side-by-side ("2.5D")? Probably, IMHO.
(b) Will the DRAM be packaged using something that's NOT wirebonding, e.g. the vertical vias used by HBM? That costs a little more, but uses a little less power than wirebonding.
(c) Will PiM be present? Apple has a bunch of patents on how to use PiM, and it's the obvious next step in DRAM evolution, so the only question really is: will it be 2026? Or earlier?? Or later :-(
 
I thought the M1’s memory was supposed to be superior to off-package RAM so why are they doing this? Is it still Unified Memory? Or am I missing something here?
 
Basically, Apple is re-inventing PCs. This could be the start of user swappable RAM, SSD and potentially discrete GPUs.

Imagine a MacBook with the base GPU and the option for an Apple discrete GPU

For ex [M5 Pro 18-core GPU w/M5 Max 40-core discrete GPU]

You are reading too much into it. No, the SoC in MacBooks isn't going to be broken apart.

The M-series SoC is all soldered together for efficient production, high performance, energy efficiency, and fewer potential failures. Mac users don't want to go back to the days of GPUs dying, like we had with multiple Nvidia and Radeon GPUs.

Apple wants to make sure all these are optimal for user performance and marketing material.

They could get rid of the SoC in Mac Pros and have the memory and GPU separated. On a tower computer there's no need for the efficiencies they created for the laptops.
 
Doesn't packaged memory offer significantly reduced latency though?

Reduced latency, but apparently at the cost of bandwidth. They're victims of shrinking the SoC: they can no longer fit the number of pins/lanes to the memory that they need for bandwidth, so they have to move the memory away from the SoC to get that bandwidth, and now they're back to separate memory, which means increased latency because of the physical distance.

Sort of like the trade off with spinning HDDs vs flash/SSD but not at all for the same reasons.
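
To see when each side of that tradeoff matters, here's a rough sketch with made-up numbers: for a tiny random access the latency term dominates, while for the big streaming reads a GPU or Neural Engine does, the bandwidth term dominates, so giving up a few nanoseconds of latency for twice the lanes can still be a net win.

```swift
// Rough model: time to move a block ≈ access latency + size / bandwidth.
// Latency and bandwidth figures are hypothetical, for illustration only.
func transferTimeMicroseconds(bytes: Double, latencyNs: Double, bandwidthGBps: Double) -> Double {
    // GB/s is conveniently the same as bytes per nanosecond
    let transferNs = bytes / bandwidthGBps
    return (latencyNs + transferNs) / 1000
}

// 64-byte cache line: latency is everything, bandwidth barely matters.
let lineOnPackage  = transferTimeMicroseconds(bytes: 64, latencyNs: 100, bandwidthGBps: 100)  // ≈ 0.10 µs
let lineOffPackage = transferTimeMicroseconds(bytes: 64, latencyNs: 110, bandwidthGBps: 200)  // ≈ 0.11 µs

// 64 MB of model weights streamed by the GPU/NPU: bandwidth dominates.
let weightsOnPackage  = transferTimeMicroseconds(bytes: 64e6, latencyNs: 100, bandwidthGBps: 100)  // ≈ 640 µs
let weightsOffPackage = transferTimeMicroseconds(bytes: 64e6, latencyNs: 110, bandwidthGBps: 200)  // ≈ 320 µs
print(lineOnPackage, lineOffPackage, weightsOnPackage, weightsOffPackage)
```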
 
I thought the M1’s memory was supposed to be superior to off-package RAM so why are they doing this? Is it still Unified Memory? Or am I missing something here?

Still the same architecture, just physically moving the RAM chips from sitting connected to the SoC to a little bit away from it, because they need to add more lanes/pins between the RAM chips and the SoC.
 
Such a weird move. Apple said integrating memory, or unified memory, can dramatically increase performance by having zero latency

They never said that. Apple's RAM solution has the same latency as any other LPDDR.

but now they want to move RAM to a separate place again just like others?

You are putting too much trust in crappy journalism.
 
Doesn't packaged memory offer significantly reduced latency though?

No, it doesn't. Apple's RAM has the same latency as any other LPDDR. The latency is determined by the protocol; the trace length is just a minuscule factor.

The main advantage of package on package RAM is reduced space, which is why all smartphones use it.
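
A quick sanity check on that point, with illustrative numbers: the CAS latency baked into the LPDDR protocol is on the order of 15-20 ns before controller overhead, while the extra signal propagation over a few centimetres of PCB trace is a fraction of a nanosecond, so where the packages physically sit barely moves the latency needle.

```swift
// Compare the protocol's CAS latency with propagation delay over a longer trace.
// All numbers are illustrative, not measured figures for any Apple part.
let commandClockMHz    = 1066.0  // example LPDDR command/address clock
let casLatencyCycles   = 17.0    // example CAS latency in clock cycles
let protocolLatencyNs  = casLatencyCycles / commandClockMHz * 1000  // ≈ 16 ns

let extraTraceCm       = 3.0     // extra distance if the DRAM sits beside the package
let signalSpeedCmPerNs = 15.0    // roughly half the speed of light, typical for PCB material
let propagationNs      = extraTraceCm / signalSpeedCmPerNs          // ≈ 0.2 ns

print(protocolLatencyNs, propagationNs)
// The protocol term dwarfs the wire term, which is the point above:
// package-on-package saves board space, it doesn't meaningfully save latency.
```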
 
Doesn't packaged memory offer significantly reduced latency though?

Yes, that's the tradeoff. On-package memory limits "bandwidth" (the number of lanes you can fit). They want to increase the number of lanes, so it's a "you win some, you lose some" situation: increased bandwidth at the risk of increased latency.
 
AI this, AI that... such an overused term nowadays. AI fatigue has well and truly set in...

It can be exhausting when the term "AI" is used so broadly, even for functionality that is not technically artificial intelligence. Much of that is just a subset of AI called machine learning.

But at the end of the day, AI is not going anywhere, so it's up to each of us to learn to live with it, and try to leverage it in new and creative ways that weren't possible before. The only constant in the world is change.
 