So, apps that incorporate LLMs will get hungry for those resources.
Amen.
We are seeing right now, in early 2025, the growing pains of Apple Intelligence.
The rollout is going slowly... and embarrassingly so, for Apple.
It's not that Apple does not have good SW engineers. They do.
The problem is that Apple declared they wanted to run AI on device. And that takes memory.
On my 16GB iMac, if I run Ollama I'm limited to the reduced 7B model versions.
Try to load anything bigger and it will crash.
Even on Apple Silicon there will be crashes. As in hard Mac crashes. Go check out Alex Ziskind's YouTube channel for examples.
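For a rough sense of why 7B is about the ceiling on a 16GB machine, here's some back-of-envelope Python. The bytes-per-parameter and overhead numbers are my own ballpark assumptions for typical quantized GGUF-style models, not measurements of any particular model:

```python
# Back-of-envelope memory estimate for running a local LLM.
# Bytes-per-parameter and overhead are rough assumptions of mine,
# not exact figures for any specific model file.

BYTES_PER_PARAM = {"4-bit": 0.57, "8-bit": 1.07, "fp16": 2.0}
OVERHEAD_GB = 2.0  # KV cache + runtime, ballpark at modest context

def est_gb(params_b: float, quant: str) -> float:
    """Approximate resident GB: weights plus fixed overhead."""
    return params_b * BYTES_PER_PARAM[quant] + OVERHEAD_GB

for size in (7, 13, 34):
    row = "  ".join(f"{q}: {est_gb(size, q):5.1f} GB" for q in BYTES_PER_PARAM)
    print(f"{size:>3}B  {row}")
```

That puts a 4-bit 7B model around 6GB resident, a 13B around 9-10GB, and a 34B past 21GB. On a 16GB Mac, macOS and your other apps already claim several GB (and, as I understand it, Metal won't hand the GPU the whole unified pool anyway), so 7B is comfortable, 13B is marginal, and anything larger is asking for the hard crashes above.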
AI is overhyped, for sure, yet that does not mean you're going to be able to avoid it.
Apple has a five-decade history of squeezing as much as possible into as little RAM as possible.
For grandparents wanting to FaceTime with their grandkids on an iMac, the minimum RAM is fine. I often defend that, since most humans on this planet only want a computer to watch a few videos, check a social media site or two, and so on.
But the people in this forum, in this thread, are not normal people. (My signature line here used to point that out.)
We here are, for the most part, power users.
I want to run 70-billion-parameter versions of AI models when I can't run the original model, and that means 128GB of unified memory, or more (rough math below).
You may not want that, but if Apple Intelligence becomes what Apple keeps projecting it to be, then you should double your boot storage and RAM.
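And here's where the 128GB figure for a 70B model comes from. Again, the bytes-per-parameter values are my own approximations, not numbers for any specific model:

```python
# Why 70B-class models push you toward 128GB: back-of-envelope math.
# Bytes-per-parameter below are rough assumptions, not official
# figures for any particular model or quantization format.

params_b = 70  # billions of parameters
for label, bpp in [("fp16 (near-original)", 2.0),
                   ("8-bit quant", 1.0),
                   ("4-bit quant", 0.5)]:
    print(f"70B @ {label:<20} ~ {params_b * bpp:5.0f} GB of weights alone")
```

Even the 4-bit cut is roughly 35GB of weights before you add the KV cache, your context, the app itself, and macOS. Get anywhere near the original precision and you're at 70-140GB, which is why 128GB of unified memory, or more, is the realistic target.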