I am coming from an M1 Pro. I do photo editing in Lightroom Classic. Is there a significant difference between M4 and M5 for my use case?
Lightroom uses GPU acceleration and for some tasks uses the NPU/Neural Engine (though it's been disabled for the Denoise function due to quality issues). The GPU improvements would be noticeable in the M5 over the M4, but both would be substantially faster than the M1 variants.
 
I am getting really tired of Apple playing this game of “here’s our brand new processor! OR…you can buy the super maxxed ultra crazy version of the previous processor at a super maxxed premium price!!!”
I mean, that's how most chip manufacturers operate. When introducing a new generation, Intel and AMD usually release consumer or mobile chips first, and the high-end workstation Xeon and Threadripper chips are the last to be updated to the new architecture because they're lower volume. TSMC likely doesn't have the fab capacity for Apple to update all of its chips and all of its devices at the same time while also keeping up with demand for A-series chips.
 
Does anyone know what other "Tweaks" Apple makes for different products containing the M5 Chip?

I was fully expecting the Vision Pro with M5 to be VASTLY more powerful than the old original M2 version
And so it should be.

But I heard on a VR podcast that Apple has "tuned" it more for power and efficiency as opposed to performance in the upgraded Vision Pro.

No one actually has it yet, so we don't have any comparison reviews, but does anyone know any more?
An M5 may not just be an M5 because it's an M5, perhaps?
 
I know it won't happen, but I'd be curious to see the performance of an Apple Silicon server cluster/blade computing. With something like the Mac Studio, but optimize the cooling for blade server systems, I'd wonder how many can be used in a standard rack?
Honestly, I wish something like the M series ran vSphere. I bet it would really kick ass.
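For fun, here's a back-of-the-envelope sketch of the rack-density question above. The dimensions are my own assumptions (roughly Apple's published Mac Studio figures of about 3.7 in tall and 7.7 in wide, a 42U rack, and a ~17.75 in usable interior width); this is an illustration of the arithmetic, not a real deployment plan that accounts for cooling, cabling, or shelving.

```python
# Rough rack-density estimate for Mac Studio-sized nodes.
# All dimensions below are assumptions for illustration.
RACK_UNIT_IN = 1.75        # height of 1U in inches
RACK_UNITS = 42            # common full-height rack
RACK_WIDTH_IN = 17.75      # approx. usable interior width of a 19" rack

STUDIO_HEIGHT_IN = 3.7     # approx. Mac Studio height
STUDIO_WIDTH_IN = 7.7      # approx. Mac Studio width

units_tall = int((RACK_UNITS * RACK_UNIT_IN) // STUDIO_HEIGHT_IN)
units_wide = int(RACK_WIDTH_IN // STUDIO_WIDTH_IN)
total = units_tall * units_wide

print(f"~{units_tall} rows x {units_wide} per row = ~{total} machines per rack")
```

Even this naive packing suggests dozens of machines per rack, which is why the idea keeps coming up.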
 
The gains from M4 to M5 are, CPU-wise, just incremental. Not bad either: the best single-core score in Geekbench 6 of any consumer CPU, and a multi-core score near the M1 Ultra, which is a lot. But compared to the M4, it's an incremental ~15%, aided by the new process (N3P) and the much-improved new efficiency cores (e-cores).


However, while the CPU changes are good but incremental, the improvements on the GPU side are much more significant. I suspect this is a whole new GPU core architecture, because both graphics in general and ray tracing in particular are much improved.

But the biggest, dare I say almost disruptive, upgrade is in AI operations, which apparently will now make extensive use of the GPU thanks to the tensor cores. Now more than ever it is really tempting to equip our M5 machine with the maximum amount of RAM available…
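To make the RAM point concrete, a local model's weights need roughly parameter count × bytes per parameter of unified memory, where quantization sets the bytes per parameter. A quick sketch with illustrative model sizes (not any specific product):

```python
# Approximate memory footprint of a local model's weights:
# bytes = parameter_count * (bits_per_param / 8).
def model_memory_gb(params_billion: float, bits_per_param: int) -> float:
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 2**30  # GiB

# An illustrative 8B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"8B params @ {bits}-bit: ~{model_memory_gb(8, bits):.1f} GB")
```

Activations and context add overhead on top of this, which is why generous unified memory matters for local AI.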


I am coming from an M1 Pro. I do photo editing in Lightroom Classic. Is there a significant difference between M4 and M5 for my use case?
The M4 is already very good, but if you want to make a little long-term investment, I'd go with the M5. Not only does it have much better graphics performance, it will also handle any hypothetical future AI features much, much better.

Lightroom uses GPU acceleration and for some tasks uses the NPU/Neural Engine (though it's been disabled for the Denoise function due to quality issues). The GPU improvements would be noticeable in the M5 over the M4, but both would be substantially faster than the M1 variants.
Here’s a better answer from a fellow MR user.
 
So much effort wasted on "AI" junk. No mention of single-threaded speeds, which are what the overwhelming majority of tasks rely on, especially the tasks a user buying a base chip is going to actually use it for.
Video encoders are "junk" -- unless you need them for your workflow.
GPUs are "junk" for many people.
NPUs are "junk" for people who don't use apps that access them.
Matrix-multiply accelerators are "junk" -- until you start needing matrix manipulation.
Efficiency cores are "junk" -- unless you want to conserve battery life when doing background tasks.

Apple solves many problems with each of their processor architectures. So does Qualcomm. So does ARM. So does everybody! Everything is a blend. If you obsess about single-processor speed, you would be better off inventing a time machine and going back to before 1999 when there were no multi-processors or GPUs. Or you could design and fab your own #!$$ processor. Good luck with that.

It is pointless to grouse about "wasted" real estate on any Apple general processor architecture. The M5 architecture does an amazing diversity of tasks amazingly well, and you have no idea when your needs may shift to another application. Appreciate the brilliance of its design and architecture. If you don't appreciate it, you can always go elsewhere.
 
Honestly even every 5 years feels excessive now. I was using a 2013 trash can Mac Pro until the end of 2023, and my M2 Max Studio will probably last me another decade. For Logic I really could’ve stuck to the old machine, I feel the difference mostly in web tasks and especially 4K video editing.
I agree that most of us don’t even need to upgrade every five years. My base model M4 MacBook Air replaced a 10 year old Dell desktop. The M4 is amazing, but for my routine of web browsing, spreadsheets and letter writing, I don’t see any huge productivity increase.
I waited for the M4 because I wanted two external monitors and to be able to use the built in screen at the same time. Speed is a bonus. The microphone is probably a bigger deal for me than speed. Calls on my Mac are fantastic quality. It would be hard to go back to a pc.
 
If I can make a suggestion: articles like this one would make more sense if you compared the device to the last 4 or 5 versions of the same hardware. Most people aren't going to bother upgrading to each iteration of said device.

Make a table, make it readable, make it crystal clear what the gains are over the years. That would make for some proper journalism: people would be able to make a sound decision on whether to upgrade or not, and they'd be happy to read in-depth articles on all the differences. And the author would be vastly more proud of said work.
Do I have a youtuber for you...

He does exactly what you are asking for: compares M-series SoCs' performance for Lightroom, etc.


 
What I would like to see is Apple collaborating with different actors to harness all the new capabilities of the M5 and future SOCs

- Apple collaborating with Local AI content generation platforms like Stable Diffusion and LLMs.
- Apple collaborating with EPIC to improve Unreal Engine performance on Mac.
- Apple collaborating with more Game Studios.
 
I know it won't happen, but I'd be curious to see the performance of an Apple Silicon server cluster/blade computing. With something like the Mac Studio, but optimize the cooling for blade server systems, I'd wonder how many can be used in a standard rack?
My M4 Max Studio never gets warm when compiling or doing heavy programming tasks. Even gaming with Cyberpunk (heavy GPU) or other games like Frostpunk 2 (heavy GPU and CPU), it barely gets warm. I have never heard the fans once.

Even when I was being super extra and pushing heavy-duty local LLMs while compiling, it got slightly warm, but no fan noise ever. Pretty nuts.
 
How is the single-threaded CPU performance? That's the most important metric for performance. I suspect there's a reason this number isn't publicly disclosed.
The age of single-threaded performance improvements is largely over. Even when you look at the M3-to-M4 improvements, where the single-threaded score increased 25%, the scores were skewed by large gains in a handful of AI tests that use new matrix instructions, like object detection (+114%), whereas code compilation was only up slightly (+14%). The M5 continues this trend: outside of AI tasks, half the meager single-threaded improvement is explained by clock speed bumps, with very little improvement from the architecture itself. This is not a criticism of Apple, which is actually scaling single-threaded CPU performance better than other chip designers... this is just the end of an era.
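The clock-vs-architecture split in that post can be sketched with simple arithmetic: single-thread performance ≈ IPC × clock, so dividing the overall gain by the clock gain isolates the architectural (IPC) share. The numbers below are illustrative, not measured M5 figures.

```python
# perf = ipc * clock, so: overall_gain = ipc_gain * clock_gain.
def ipc_gain(overall_gain: float, clock_gain: float) -> float:
    """Isolate the architectural (IPC) share of a single-thread speedup."""
    return overall_gain / clock_gain

# Hypothetical: a 10% overall single-thread gain with a 7% clock bump
# leaves only ~2.8% attributable to the core architecture itself.
gain = ipc_gain(1.10, 1.07)
print(f"IPC improvement: {(gain - 1) * 100:.1f}%")
```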
 
What I would like to see is Apple collaborating with different actors to harness all the new capabilities of the M5 and future SOCs

- Apple collaborating with Local AI content generation platforms like Stable Diffusion and LLMs.
- Apple collaborating with EPIC to improve Unreal Engine performance on Mac.
- Apple collaborating with more Game Studios.
1. Apple pushes a lot of open-source code for local AI; see MLX: https://github.com/ml-explore. Most popular models have an MLX variant, including Stable Diffusion. Check out LM Studio to very easily find them and try them out.
2. Unreal Engine 5 natively supports Metal now, and games by default are set to target Macs; unfortunately, game studios often disable it.
3. Yes, please.
 
If a 30% increase in graphics performance means games run 30% better, that's pretty significant.

As for AI, if you are not running local AI models, is there any benefit to the user?
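On the graphics question, the arithmetic is simple if a game is fully GPU-bound: frame rate scales roughly linearly with GPU throughput. A quick sketch with illustrative numbers (real games are rarely perfectly GPU-bound, so treat this as an upper bound):

```python
# If a game is fully GPU-bound, frame rate scales ~linearly with GPU speed.
def scaled_fps(base_fps: float, gpu_speedup: float) -> float:
    return base_fps * gpu_speedup

# Illustrative: a 30% faster GPU takes a 60 fps title toward ~78 fps.
print(f"{scaled_fps(60, 1.30):.0f} fps")
```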
 
Does anyone know what other "Tweaks" Apple makes for different products containing the M5 Chip?

I was fully expecting the Vision Pro with M5 to be VASTLY more powerful than the old original M2 version
And so it should be.

But I heard on a VR podcast that Apple has "tuned" it more for power and efficiency as opposed to performance in the upgraded Vision Pro.

No one actually has it yet, so we don't have any comparison reviews, but does anyone know any more?
An M5 may not just be an M5 because it's an M5, perhaps?
The new AVP pushes 10% more pixels and maxes out at 120 Hz instead of 90 Hz. It also runs AI workflows and non-AI apps at twice the speed of the M2 it replaces. That said, improving power efficiency is important because the M2 ran hot in the AVP, a device that goes on your face.
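Those two figures compound: pixel throughput scales with both pixels per frame and refresh rate, so the new panel asks the GPU for roughly 47% more pixels per second. A quick sketch using the 10% and 120-vs-90 Hz figures from the post:

```python
# Pixel throughput = pixels_per_frame * frames_per_second.
pixel_increase = 1.10           # 10% more pixels per frame
refresh_increase = 120 / 90     # 120 Hz vs 90 Hz

throughput_gain = pixel_increase * refresh_increase
print(f"~{(throughput_gain - 1) * 100:.0f}% more pixels per second")
```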
 
If a 30% increase in graphics performance means games run 30% better, that's pretty significant.

As for AI, if you are not running local AI models, is there any benefit to the user?
Yes-ish. With macOS 26, Apple gave developers the option to use the built-in AI models to do things locally. Any app that takes advantage of this will run much faster behind the scenes. So the answer depends on developer support.

For example we are currently experimenting with using Apple's built in AI models to parse scanned in medical paperwork so we don't have to deal with privacy risks and HIPAA compliance issues.
 