Looking at my 48GB machine right now, the top 10 processes are using 17GB RAM, and that's with only one tab open in Safari, along with two more in Firefox.

how does a safari tab and 2 firefox ones reach 17GB RAM usage? what websites are these?
does this mean you downloaded 17GB from those sites? that's insane
 
and maybe in the future I will use LLMs locally and develop ML
I wasn’t interested in LLM / ML until I got my M4 Pro 24GB MBP last year.

The more I got interested in LLM, the more 24GB became a bottleneck and I looked for something better. I promised myself if the 2025 Studio came out with 128GB RAM, it’d be a day one purchase. So I got one.

Several months later, I find the 128GB starting to pinch. Don’t get me wrong, you can do a *lot* with it, but I don’t see it lasting 5-6 years before I’ll be replacing it with something that has more RAM.

I’ve just watched a YouTube video of someone running Kimi K2 locally, squashed down to Q3 on a 512GB Mac Studio, and he was pushing close to 510GB RAM use; the LLM response terminated due to insufficient RAM. And that’s today. Where will we be in 2 years?

If you want to do LLMs/ML, seriously look at the RAM situation. With the developments going on, 128GB isn’t going to be enough for long. With 36GB, manage your expectations and you’ll be fine for now, but don’t be surprised if you find yourself craving more RAM real soon.

Also, running LLMs pushes the GPU, making the fans run due to heat and, if on a MacBook, draining the battery in no time. Points to consider.
 
When working with my usual number of Safari tabs open (often more than 20), I'm very glad I've got the 48GB. On my MacBook Air, which has 24GB RAM, I have to be careful which tabs I keep open, as you can easily bump into that limit and things start slowing down once macOS resorts to swapping to the SSD.

IMHO, as the RAM is no longer upgradable, I'd get the largest amount you can: future OSes are unlikely to need less RAM, only more.
I’ve done the same.

OP, are you planning to use your Mac for the following tasks:
- Full-stack development
- Developing large iOS apps
- Working extensively with LLMs, AI, and machine learning
- Game development, including using Roblox Studio and rendering software

If you’re planning to engage in any of these activities, you’ll require more RAM.
 
the reality is that 5-6 years is an eternity for a laptop. no need to max out the system to 'future proof'.

you'll want to sell and upgrade your machine in 3-4 years, if not before that.

you'd be better off with incremental upgrades as tech progresses and as you feel out what you actually require as a student.

top ramen gets old fast! :)
 
For your (initial) criteria, I would say "Yes!"

To the quoted, you're going to eventually find that you are basically chasing your tail...

...the basic consumer technology available now will inevitably be eclipsed by that which we will enjoy in 2030.

Think "Two Years" before you invest in six.
I'll get a lot of hate for this, but I use an LLM and Xcode locally (for studying Swift) and I have 8GB of RAM. I get by just fine.
 
I'll get a lot of hate for this, but I use an LLM and Xcode locally (for studying Swift) and I have 8GB of RAM. I get by just fine.
You can use an LLM on your phone (I do). Some are small enough that 1GB RAM is sufficient. More RAM lets you do more.

I can’t analyse a 170,000-word story on my phone. I wouldn’t fine-tune an LLM on my 24GB MacBook Pro.

I’m currently looking at how feasible it is to run MiniMax M2 - which is going to take 100-110GB RAM even at Q3.

It’s not a matter of what you can do, it’s a matter of what you want to do.
 
You can use an LLM on your phone (I do). Some are small enough that 1GB RAM is sufficient. More RAM lets you do more.

I can’t analyse a 170,000-word story on my phone. I wouldn’t fine-tune an LLM on my 24GB MacBook Pro.

I’m currently looking at how feasible it is to run MiniMax M2 - which is going to take 100-110GB RAM even at Q3.

It’s not a matter of what you can do, it’s a matter of what you want to do.

it's also a function of how much you're willing to pay.

some people are building $10-100k server systems with multiple CPUs and GPUs and gobs of RAM.

others are paying through the nose for credits to use someone else's clusters.
 
it's also a function of how much you're willing to pay.

some people are building $10-100k server systems with multiple CPUs and GPUs and gobs of RAM.

others are paying through the nose for credits to use someone else's clusters.
Most of the servers at work have 8 to 16GB. Some have 4GB, and those are big companies I'm talking about.
I'm not going to argue about that at all, because I've seen it with my own eyes and NDAs have been signed.
One company still uses Windows Server 2003, but I don't know what it's for.

The bigger issue is the storage, just like with my MacBook. The lack of RAM doesn't bother me yet, but the lack of disk space certainly does, as I don't have a USB-C external hard drive and all of my external hard drives are rather slow and old.
 
it's also a function of how much you're willing to pay.

some people are building $10-100k server systems with multiple CPUs and GPUs and gobs of RAM.

others are paying through the nose for credits to use someone else's clusters.
Exactly true. Set yourself a budget and get the most “bang for buck” you can. That path leads to the least regrets because you know you couldn’t have done more.
 
the reality is that 5-6 years is an eternity for a laptop. no need to max out the system to 'future proof'.

you'll want to sell and upgrade your machine in 3-4 years, if not before that.

you'd be better off with incremental upgrades as tech progresses and as you feel out what you actually require as a student.

top ramen gets old fast! :)
😂 I don’t upgrade my stuff often
 
Most of the servers at work have 8 to 16GB. Some have 4GB, and those are big companies I'm talking about.
I'm not going to argue about that at all, because I've seen it with my own eyes and NDAs have been signed.
One company still uses Windows Server 2003, but I don't know what it's for.

The bigger issue is the storage, just like with my MacBook. The lack of RAM doesn't bother me yet, but the lack of disk space certainly does, as I don't have a USB-C external hard drive and all of my external hard drives are rather slow and old.
we're talking about servers designed for AI.

they are most certainly not using 8-16GB for this at any company anywhere.

with regards to windows at most small-medium businesses, it's likely used for OIM/LDAP/email. and for those use cases, yes, normal off-the-shelf servers are more than enough in most cases. though more and more companies are migrating to virtualized systems.
 
well, for your dev, 24GB's fine.

But you also said local LLMs.

Take a look at a list of available models. Here is one (just a subset of all available models): https://ollama.com/library?sort=popular. Then see if there are any you want/need to use but cannot on a 24GB machine. For example:
- gpt-oss 20b is 14GB, but gpt-oss 120b is 65GB.
- deepseek-r1 14b is 9GB, 32b is 20GB, 70b is 43GB.
- Llama3.1 8b fp16 is 16GB, 70b q4km is 43GB, 70b q8 is 75GB.
And so on. These sizes count in addition to system resources.
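
If it helps, here is the back-of-envelope sum I use to sanity-check those sizes (a rough sketch of my own, not ollama's published numbers; the effective bits-per-weight values are my assumptions):

```python
# Rough RAM estimate for a model's weights alone; the KV cache,
# context, and the OS itself all need memory on top of this.
# Effective bits-per-weight for common quants (approximate):
BITS_PER_WEIGHT = {"fp16": 16.0, "q8": 8.5, "q4km": 4.85, "q3": 3.9}

def weight_gb(params_billion: float, quant: str) -> float:
    return params_billion * BITS_PER_WEIGHT[quant] / 8

print(f"{weight_gb(8, 'fp16'):.0f} GB")  # llama3.1 8b fp16  -> ~16 GB
print(f"{weight_gb(70, 'q4km'):.0f} GB") # llama3.1 70b q4km -> ~42 GB
print(f"{weight_gb(70, 'q8'):.0f} GB")   # llama3.1 70b q8   -> ~74 GB
```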

I would see if you can ask around and try a friend's 24GB machine for a couple of days to get a feel for what these models provide and at what speed. You cannot judge just off the size of the models, as some you can squeeze into working, but slowly, and others you think will fit just fizzle.

FYI, I just traded in my 36GB M4 Studio for one with more RAM in order to run larger local LLMs. At some point you have to settle for what you really want/need.
Local LLMs with a MacBook Pro? Even an M3 Ultra with 512GB RAM is borderline meee...
 
Hey guys, I'm a software engineering student and I need a new MacBook Pro.
I have kept an eye on the M4 Pro, but I don't know if I should go with 24GB or 48GB of RAM?

I'm doing mainly web dev; I will install some dev apps, productivity apps, many browser tabs,

and maybe in the future I will use LLMs locally and develop ML.
I want this laptop to last for at least 5-6 years.

Thanks guys :)
If you can afford 48GB, go for it and don't look back (no regrets after).
 
If you can afford 48GB, go for it and don't look back (no regrets after).

I have a single 48GB SODIMM @ DDR5-4800 in my Gracemont N305 (E-core) mini-PC (cost me USD 150 c. Fall 2024, and it continuously runs Proxmox containers/VMs).

Also have two 64GB UDIMMs @ DDR5-5600 in my EPYC Grado (4565P) system (cost me ~USD 250 mid-2025, and it is doing nothing right now because I don't have the GPU I need to help it fly).

90% of the time I use my '23 M2 Studio (w/64GB)

3% of the time I use my M4 MacBook Air (w/24GB)

7% of the time I use my iPhone 13-Mini (w/"unknown?")

If everything I use housed 256GB+ of RAM, well . . . I'd probably be resting in the Mountains, rather than over-sharing, here ;)
 
If everything I use housed 256GB+ of RAM, well . . . I'd probably be resting in the Mountains, rather than over-sharing, here ;)

If all your machines had more RAM, then instead of using your computers you'd be in the mountains?

I don't get it.
 
Hey guys, I'm a software engineering student and I need a new MacBook Pro.
I have kept an eye on the M4 Pro, but I don't know if I should go with 24GB or 48GB of RAM? :)
I do web development, Adobe Photoshop, Final Cut Pro X, 3DFX generation, and app development on my early 2008 MacBook Pro with 4GB of 667MHz DDR2 RAM on OS X Leopard. You will be fine with 24GB. It's DDR5 RAM and is lightning fast.
 
I do web development, Adobe Photoshop, Final Cut Pro X, 3DFX generation, and app development on my early 2008 MacBook Pro with 4GB of 667MHz DDR2 RAM on OS X Leopard. You will be fine with 24GB. It's DDR5 RAM and is lightning fast.

Exactly what I am thinking; I don't understand why new machines need 10GB+ RAM. I am only guessing it's for 4K and 8K content or some of the complex 3D models.
 
Local LLMs with a MacBook Pro? Even an M3 Ultra with 512GB RAM is borderline meee...
Borderline... what is meee? Inference from LLMs really depends on your needs and the model you are using. Yesterday I got perfectly acceptable results from multiple models on my 24GB M2 MBP. There was no borderline acceptability... the results were perfectly acceptable. They were also not tasks for which larger models were needed or would even be more useful.

I am curious: which task are you doing that makes your M3U 512 borderline? Probably for your task you could just pay for an online service to access the largest models. I think that is a great, cost-effective solution as long as you don't want the privacy afforded by running locally.
 
Borderline... what is meee? Inference from LLMs really depends on your needs and the model you are using. Yesterday I got perfectly acceptable results from multiple models on my 24GB M2 MBP. There was no borderline acceptability... the results were perfectly acceptable. They were also not tasks for which larger models were needed or would even be more useful.

I am curious: which task are you doing that makes your M3U 512 borderline? Probably for your task you could just pay for an online service to access the largest models. I think that is a great, cost-effective solution as long as you don't want the privacy afforded by running locally.
I think we use whatever model fits our system and works for us.

When I just had 24GB, I found some decent 10B models to work with for my story-writing. I even pushed to some 32B models that were okay. I used a 7B model on my iPhone 15 Pro recently, and I’d say it was pretty good for my limited test.

With my 128GB, I find a 30B model seems to be working the best for me.

Today I started an “AI Challenge” - a coding project comparing the output from both online AIs and a local LLM. The local LLM is MiniMax M2 - which is around 100GB. It’ll be interesting to compare results. Maybe I should also try a smaller sub-10B local model?

A project I had a little while back was to get a local LLM to read scenes from my story in turn, create an “image generation prompt”, send that to A1111 to generate the image, then insert that image into the scene document (“illustrating my story”). Running both the LLM and Stable Diffusion side by side wouldn’t have been practical on my 24GB MBP.
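
For the curious, the glue was essentially this - a minimal sketch, assuming ollama on its default port 11434 and A1111 launched with --api on port 7860; the model name and file names are just placeholder examples:

```python
import base64
import requests

scene = open("scene_01.txt").read()  # placeholder: one scene from the story

# 1. Ask the local LLM (via ollama's REST API) to write an image prompt.
r = requests.post("http://localhost:11434/api/generate", json={
    "model": "qwen2.5:32b",  # placeholder: whatever model fits your RAM
    "prompt": "Write a Stable Diffusion prompt for this scene:\n" + scene,
    "stream": False,
})
sd_prompt = r.json()["response"]

# 2. Send that prompt to A1111 (started with --api) to render the image.
r = requests.post("http://localhost:7860/sdapi/v1/txt2img",
                  json={"prompt": sd_prompt, "steps": 25})
png = base64.b64decode(r.json()["images"][0])

# 3. Save the render; inserting it into the scene document is a separate step.
open("scene_01.png", "wb").write(png)
```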

I can see where 512GB would be useful. The version of MiniMax I’m using is quantised to 3-bit. I wouldn’t normally go that low, but my “limited” RAM makes it a necessity.

If you want to run Kimi-K2-Thinking locally, you’ll need anything between 250GB (1-bit) and 2TB of RAM. Who knows what the future might bring.
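
(That tracks with the usual back-of-envelope maths, assuming Kimi-K2-Thinking’s roughly 1T total parameters: fp16 is about 1000B × 16 bits ÷ 8 = 2TB for the weights alone, while a “1-bit” quant at an effective ~2 bits per weight comes to about 1000B × 2 ÷ 8 = 250GB.)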
 