
Flowstates

macrumors 6502
Original poster
I'm helping my mother over the holidays with migrating to a new Mac.

She inherited my old Intel MBP (quite the poisoned gift), whose Vega graphics card is starting to show its age with graphical bugs.

The original machine had 32GB of RAM and a 1TB SSD.

I'm considering two machines:

- MBA 15" M4, 16GB/512GB
- MBA 15" M3, 24GB/1TB

I was considering the M4 initially, but found a new machine with spec #2 for the same price. I'm heavily leaning towards the M3 with more RAM and storage; I just want to get someone else's opinion.

Mostly an Excel / Notion / Mail machine, but she has taken a keen interest in genAI.
 
  • Like
Reactions: arc of the universe
For someone who inherited an old MBP, the bump from M3 to M4 is negligible for general computing use. However, the bump in RAM and storage will be of more value in the ensuing years. One could make a strong argument that your mother would not notice the difference between an M1/M2/M3/M4.
 
For someone who inherited an old MBP, the bump from M3 to M4 is negligible for general computing use. However, the bump in RAM and storage will be of more value in the ensuing years. One could make a strong argument that your mother would not notice the difference between an M1/M2/M3/M4.

She did mention AI though, and the M4 has a significantly upgraded Neural Engine (NPU). Perhaps she hasn't installed local AI models, but all the same, I think that's something to keep in mind.

M3 NPU: 18 TOPS; M4 NPU: 38 TOPS.
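Taking those published figures at face value, the generational jump works out to roughly 2x (a back-of-envelope ratio only; Apple's quoted numbers for different generations may not use the same precision, so treat it loosely):

```python
# Neural Engine throughput figures quoted above, in TOPS.
m3_npu_tops = 18
m4_npu_tops = 38

# On paper, the M4's NPU is about 2.1x the M3's.
print(round(m4_npu_tops / m3_npu_tops, 1))  # -> 2.1
```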
 
For someone who inherited an old MBP, the bump from M3 to M4 is negligible for general computing use. However, the bump in RAM and storage will be of more value in the ensuing years. One could make a strong argument that your mother would not notice the difference between an M1/M2/M3/M4.

Why? If someone doesn't use apps that take advantage of 24GB RAM, it's of no use. Someone having a 1TB iPhone doesn't make it more valuable down the road if they never use more than 128GB.

On the other hand, the user will always benefit from the faster M4 chip.
 
  • Like
Reactions: Isamilis
Thank you for the interest and insights, everyone.

I asked for a bit more information about the genAI part, and it emerged that she mostly uses APIs (OpenAI, Claude) and Discord-based tools.

Given her lack of interest when I showed the playground (what a joke) on my machine, and the fact that a self-hosted Llama 3.2 3B (what I think is a reasonable model to load on-device) is not useful to her (we tried it out), I don't think on-device compute (even if the numbers put forth by @Howard2k are impressive) will edge out the convenience of more RAM and storage (saving all of those JPEGs and managing heaps of open windows and files without dipping into swap).

I'll lean towards the M3 for the time being.
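For anyone else weighing the on-device question, a rough back-of-envelope sketch of why a 3B model is an easy fit for RAM (the quantization levels are assumptions; real runtimes add KV cache and OS overhead on top of the weights):

```python
def model_footprint_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB; ignores KV cache and runtime overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Llama 3.2 3B at 4-bit quantization: ~1.5 GB of weights...
print(model_footprint_gb(3.0, 4))   # -> 1.5
# ...versus the same model at 16-bit precision: 6.0 GB.
print(model_footprint_gb(3.0, 16))  # -> 6.0
```

Either way it fits in 16GB; it's the browser tabs and everything else alongside it that eat the rest.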
 
Thank you for the interest and insights, everyone.

I asked for a bit more information about the genAI part, and it emerged that she mostly uses APIs (OpenAI, Claude) and Discord-based tools.

Given her lack of interest when I showed the playground (what a joke) on my machine, and the fact that a self-hosted Llama 3.2 3B (what I think is a reasonable model to load on-device) is not useful to her (we tried it out), I don't think on-device compute (even if the numbers put forth by @Howard2k are impressive) will edge out the convenience of more RAM and storage (saving all of those JPEGs and managing heaps of open windows and files without dipping into swap).

I'll lean towards the M3 for the time being.

And to add, even with the numbers put forth above about the NPU: if she's running a local LLM and it's not using the NPU (GPU instead), then it's moot. She can run models off either the NPU or the GPU, depending on the platform. For the M4 they're actually pretty close (10-core GPU and NPU). For the M3 she might be fine just running off the GPU instead of the M3's NPU. Or perhaps neither the M3 nor the M4 has the horsepower she needs. That's also possible :D
 
  • Love
Reactions: Flowstates
And to add, even with the numbers put forth above about the NPU: if she's running a local LLM and it's not using the NPU (GPU instead), then it's moot. She can run models off either the NPU or the GPU, depending on the platform. For the M4 they're actually pretty close (10-core GPU and NPU). For the M3 she might be fine just running off the GPU instead of the M3's NPU. Or perhaps neither the M3 nor the M4 has the horsepower she needs. That's also possible :D

The confusing wording around modern processors got to me. Long gone are the days of graphics cores being the kings of parallelization at scale.

I have an unused server with a 3090 sitting at home, serving my lowly Docker containers. We are discussing using Parsec to access Stable Diffusion on that machine. Mind you, from what I've seen, self-hosted lags behind cloud-based unless one has some very serious homelab gear.

One can dream, although the energy costs themselves would bleed that part of the budget dry.

I am starting to be very cognizant of the implications of mentioning the prowess of a specced-out Mac Studio.
 
  • Like
Reactions: Howard2k
I have an AMD RX 7800 XT in my gaming PC and it spanks my M4 for AI, although I'm just fooling around with AI for interest. But a Studio is a different beast altogether!
 
  • Like
Reactions: Flowstates
Why? If someone doesn't use apps that take advantage of 24GB RAM, it's of no use. Someone having a 1TB iPhone doesn't make it more valuable down the road if they never use more than 128GB.

On the other hand, the user will always benefit from the faster M4 chip.
Under Apple's unified memory architecture, all apps constantly put RAM to good use, and over the next few years apps and the OS will take further advantage of that SoC architecture. This is especially true when multitasking with mail, browser, messaging, etc. open concurrently; web pages can be big RAM hogs.

Both the M3 and M4 chips are already wicked fast, and the difference between a base M3 and a base M4 is unlikely to be limiting for her computing. If the chip is limiting, it will probably be because of downgrading from an MBP to an MBA and choosing base-level chips.
 
  • Like
Reactions: Flowstates
Why? If someone doesn't use apps that take advantage of 24GB RAM, it's of no use. Someone having a 1TB iPhone doesn't make it more valuable down the road if they never use more than 128GB.

On the other hand, the user will always benefit from the faster M4 chip.
In my experience, web pages and email load just as quickly on an M1 as they do on an M4, so I've come to prefer RAM and storage over compute power for general computing needs. We really don't know how much RAM or storage will be used over the lifespan of the system; it depends on what computing activities occur over that lifespan. I'll concede that a debate over 16 vs 24GB of RAM probably isn't worth having, but at least one will certainly be able to store more than 512GB of data without an external drive inconveniently hanging off the laptop. Everybody has their own pragmatic way of making decisions.
 
In my experience, web pages and email load just as quickly on an M1 as they do on an M4, so I've come to prefer RAM and storage over compute power for general computing needs. We really don't know how much RAM or storage will be used over the lifespan of the system; it depends on what computing activities occur over that lifespan. I'll concede that a debate over 16 vs 24GB of RAM probably isn't worth having, but at least one will certainly be able to store more than 512GB of data without an external drive inconveniently hanging off the laptop. Everybody has their own pragmatic way of making decisions.
I agree with you. Right after the M2 MacBooks arrived at the Apple Stores, I purchased a "base" 13" model as a present for my wife. The only limiting factor I can see cropping up in the future is the amount of RAM and internal storage.
Neither my wife nor I is at all interested in Apple's AI. I still haven't set up AI on our MacBooks and iPhones :)
 
I have an AMD RX 7800 XT in my gaming PC and it spanks my M4 for AI, although I'm just fooling around with AI for interest. But a Studio is a different beast altogether!
All M4s are not the same. Which M4 gets spanked, and in what Mac with what RAM?
 
In my experience, web pages and email load just as quickly on an M1 as they do on an M4, so I've come to prefer RAM and storage over compute power for general computing needs. We really don't know how much RAM or storage will be used over the lifespan of the system; it depends on what computing activities occur over that lifespan. I'll concede that a debate over 16 vs 24GB of RAM probably isn't worth having, but at least one will certainly be able to store more than 512GB of data without an external drive inconveniently hanging off the laptop. Everybody has their own pragmatic way of making decisions.

Speedometer (BrowserBench) is 50% faster on the M4 than the M1 in my own testing.

Web pages get more complex each year. Speed may not matter now, but a few years down the road it'll make a difference. For the longest time, people said the A9 in the iPhone 6S was all they needed. Then it became the A13 in the iPhone 11; now it's the A18.
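As a sanity check on what a benchmark gain like that means in practice: Speedometer reports a throughput-style score (runs per minute), so a 50% higher score means tasks finish in about two-thirds the time (a generic conversion, not a claim about any specific machine):

```python
def relative_task_time(score_gain: float) -> float:
    """Relative task duration when a throughput-style score rises by score_gain."""
    return 1.0 / (1.0 + score_gain)

# A 50% score increase -> tasks take ~67% as long.
print(round(relative_task_time(0.5), 2))  # -> 0.67
```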
 
  • Like
Reactions: Flowstates
Speedometer (BrowserBench) is 50% faster on the M4 than the M1 in my own testing.

Web pages get more complex each year. Speed may not matter now, but a few years down the road it'll make a difference. For the longest time, people said the A9 in the iPhone 6S was all they needed. Then it became the A13 in the iPhone 11; now it's the A18.
Of course speed matters. But especially a few years down the road, IMO, for most workflows having 50% more RAM will matter more than the difference between a base M3 and a base M4. In reality, though, this discussion is academic, because the two boxes will be hella close except perhaps for some specific apps.
 
  • Like
Reactions: Flowstates
I don’t know how old your mum is, but I do know that if you are choosing her computer instead of her choosing her own, her requirements are slim.

Get her an M1 MacBook Air off eBay with a screen she likes and enough storage for her photos.
 
  • Like
Reactions: Flowstates
This is the MacBook Air forum; we’re talking about M4 MacBook Airs.
Actually, it is the MacRumors forum, and the thread involves migrating from an Intel MBP to two MBA choices. The comment "...an AMD RX 7800 XT in my gaming PC and it spanks my M4 for AI" sounded kind of generic, and the AMD RX 7800 XT is a high-performance graphics card with 16GB of memory on the card alone, not something I would expect to compare to Apple's lowest-end M4 chip running in the RAM-constrained low-end MBA. Plus of course there are two MBA M4 chip choices, and RAM choices as well. I will not apologize for not assuming.

The point of my question was to find out exactly what that high-performance graphics card was "spanking," since I would normally expect an AMD RX 7800 XT to be compared against an M4 Max running in a Studio, or at least in an MBP. But you are correct that the thread has only discussed the Intel MBP, the high-performance AMD card, and MBAs.
 
  • Like
Reactions: Howard2k
Actually, it is the MacRumors forum, and the thread involves migrating from an Intel MBP to two MBA choices. The comment "...an AMD RX 7800 XT in my gaming PC and it spanks my M4 for AI" sounded kind of generic, and the AMD RX 7800 XT is a high-performance graphics card with 16GB of memory on the card alone, not something I would expect to compare to Apple's lowest-end M4 chip running in the RAM-constrained low-end MBA. Plus of course there are two MBA M4 chip choices, and RAM choices as well. I will not apologize for not assuming.

The point of my question was to find out exactly what that high-performance graphics card was "spanking," since I would normally expect an AMD RX 7800 XT to be compared against an M4 Max running in a Studio, or at least in an MBP. But you are correct that the thread has only discussed the Intel MBP, the high-performance AMD card, and MBAs.


Without doubt the M4 Pro/Max and other variants are monsters. It’s a super architecture. But my MBA M4 knows its limit and plays within it.
 