Like I said, nothing terrible, but I'm hoping Apple has significant design improvements up their sleeve for the A17 or so.

I wouldn’t really expect any significant architectural improvements until several generations into the Mx series, after they work the kinks out of the SoC design. If I had to guess, I’d say they’re probably working on their own next-generation ISA that is more optimized and scales better for their specific needs.
 
I was really hoping for Thunderbolt 5, although that's been wishful thinking.

Peripherals have been capped at Thunderbolt 3's 40Gbps for what, seven years now? I want my 980 Pro to actually perform at its rated speed, not nerfed to half of it.
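Rough math on that complaint (a back-of-the-envelope sketch only; it assumes the 980 Pro's ~7 GB/s rated sequential read and Thunderbolt 3's 40Gbps link rate, and the real-world enclosure figure will vary):

```python
# Rough numbers only, not a benchmark.
TB3_LINK_GBPS = 40        # Thunderbolt 3/4 link rate, gigabits per second
SSD_RATED_GBS = 7.0       # 980 Pro rated sequential read, ~7 GB/s (PCIe 4.0 x4)

tb3_ceiling_gbs = TB3_LINK_GBPS / 8   # 5.0 GB/s theoretical ceiling
print(f"TB3 ceiling: {tb3_ceiling_gbs:.1f} GB/s vs drive rating: {SSD_RATED_GBS:.1f} GB/s")
print(f"Best case, the drive keeps {tb3_ceiling_gbs / SSD_RATED_GBS:.0%} of its rated speed")
# Real enclosures do worse: protocol overhead and bandwidth reserved for
# DisplayPort leave roughly 2.8 GB/s for PCIe data, hence "half of it" or less.
```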
 
I’m thinking of replacing my son’s 2014 iMac 5k 27” with the M2Pro Mini. Would there be a noticeable improvement on the user experience front (apart from ditching the need to vacuum the ventilation slots weekly)?

I’d need a suitable monitor as well, of course. Any suggestions? It's got to be fit for gaming, too.

Just a heads-up: you're going to be "upgrading" from a machine with a great display AND an older OS version that still supported software antialiasing. Newer versions of macOS no longer provide software AA, so a high-DPI monitor is strongly encouraged or you will see a distinct drop in text quality.
 
If the difference in cost between the M2 Pro and the M2, specced out the same for RAM and SSD size, is $300, is it worth it? You get a 10-core CPU and 16-core GPU vs. an 8-core CPU and 10-core GPU, two extra Thunderbolt ports, and HDMI 2.1 support. Wondering what everyone's thoughts are on this. Thx

HDMI 2.1 support is going to be the standard going forward. It’s not a phase that Apple is going through, right?

You could call that future-proofing, just as you could with Wi-Fi 6E and the latest Bluetooth 5.3.
 
Either the Mac Pro, the (new) iMac Pro, the Ultra, or all three will probably get the M3 Pro/Max/Ultra by the end of the year... That's how they get ya. Always keeping you wanting the next big thing...
 
Scores are all well and good, but it's still just integrated graphics at the end of the day...

Laptops with a 13950HX CPU (24 cores) and an RTX 4090 GPU (9,728 cores) will smoke these utterly and completely.
On synthetic benchmarks, yes; in real-world use, not so much. I ran a bunch of timed tests of the typical tasks I do in Adobe Photoshop and Lightroom, and even against an Nvidia RTX 6000 with huge compute scores on Geekbench and a processor that matched the single-core and beat the multi-core scores, my Mac Studio Ultra destroyed the PC. There's so much more to day-to-day use and application performance; these synthetic benchmarks are perhaps good for comparing generation to generation on the same platform, but they clearly do not work for comparing across platforms and do not reflect real-world application performance.
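For anyone curious, a minimal sketch of the kind of timed test I mean; the export_batch() function here is purely a hypothetical stand-in for whatever real task you script or trigger (a Lightroom export, a Photoshop batch action, and so on):

```python
import statistics
import time

def time_task(task, runs=3):
    """Time a real-world task a few times; report the median wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# export_batch is hypothetical -- swap in whatever you actually measure.
def export_batch():
    time.sleep(0.1)  # placeholder workload

print(f"median: {time_task(export_batch):.2f} s")
```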
 
How is the battery life, fan noise and heat on those 13th gen Intel laptops?

And what about the performance when unplugged?
Yeah Intel's Cinebench scores are great and all but I simply cannot go back to loud, hot laptops with poor battery life.
 
I have a top-spec 12th-gen Intel laptop (with 8GB Nvidia RTX A2000 graphics) on the way for someone at work who uses an application that isn't available for macOS. I'm planning on doing some benchmarking and I'll toss that into the batch. I suspect that even with the high Geekbench scores it won't match up in real-world use to the 16" M1 Max; on top of that it weighs 3 lbs more and will no doubt be hot and sound like a jet engine. I mean, Lenovo even has this line on the product page: "...redesigned vents send hot air out the back of the system, not the sides." My M1 Max has never once ramped up the fans to where I can hear them, and has never felt more than slightly warm, never hot.
 
Agreed, benchmarks don't mean a whole lot.

But for the 4090 laptop GPU they are quoting a 2.5x speedup in creator apps versus their previous best dedicated laptop GPU, the 3090 Ti: https://wccftech.com/nvidia-geforce...-3-2-4x-gain-in-creation-apps-versus-3080-ti/

Unless there are software issues, I have no idea how this wouldn't blow the M2 GPU out of the water. Both Intel and Nvidia are really stepping up their game thanks to Apple and AMD.
 
Are you familiar with the term "empty comparison"? There's literally no data on these graphs. The only "creator" app I see is DaVinci Resolve, and again, there's no baseline besides saying it's a certain percentage faster than the previous model of the same laptop. That's entirely useless as a measure of performance unless all you care about is those two laptop models. I mean, even his followers on YouTube are skewering him for this. As an aside, this is what I hate about the death of real technology media and the rise of the "YouTube expert": anyone with a camera and enough of a following becomes an expert, no matter how garbage their methodology is.

You also clearly don't understand the huge advantage the unified memory architecture gives GPU performance on Apple Silicon: with over 6x the bandwidth of even PCIe 5.0 x16, the GPU cores can be constantly cranking.
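For the "over 6x" figure, here's the arithmetic being leaned on (a sketch; it assumes PCIe 5.0's 32 GT/s per lane with 128b/130b encoding, counts the PCIe link in both directions, and uses Apple's quoted 800 GB/s for the M1 Ultra):

```python
# Back-of-the-envelope; assumes PCIe 5.0 at 32 GT/s per lane with 128b/130b
# encoding, and Apple's quoted 800 GB/s unified-memory figure for the M1 Ultra.
pcie5_x16_per_dir_gbs = 32 * 16 * (128 / 130) / 8    # ~63 GB/s each direction
pcie5_x16_both_ways_gbs = 2 * pcie5_x16_per_dir_gbs  # ~126 GB/s aggregate
ultra_unified_gbs = 800

print(f"PCIe 5.0 x16: ~{pcie5_x16_per_dir_gbs:.0f} GB/s per direction, "
      f"~{pcie5_x16_both_ways_gbs:.0f} GB/s counting both directions")
print(f"M1 Ultra: {ultra_unified_gbs} GB/s, about "
      f"{ultra_unified_gbs / pcie5_x16_both_ways_gbs:.1f}x the bidirectional PCIe figure")
```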
 
The performance differences between Nvidia's 40xx and 30xx architectures are well known and tested at this point. The 40xx really is much faster in terms of fps for games, and there are plenty of benchmarks for the creator apps too: https://www.pugetsystems.com/labs/a...e-rtx-4090-24gb-content-creation-review-2374/ for example, which shows it is a massive upgrade over the 30xx.

Methinks you might be a little too caught up in the reality distortion field to see clearly if you think that integrated graphics compares to top-of-the-line discrete graphics chips.
 
LOL. No, you clearly don't understand that the Apple Silicon architecture is not like an embedded GPU in the x86 world; the memory bandwidth is 10-20x faster and the amount of memory available is huge in comparison. You are trying to compare apples and oranges--and the benchmarks you just cited are not capable of being run on Apple Silicon, so again, it's a pointless comparison and reveals how little you actually know about how these things work and what their actual specs and capabilities are.

Absolutely no one has claimed that Apple can match a 4090--nor are they even *trying* to do so in these machines. To start with, it's huge and needs a huge amount of power and cooling: to quote from the Puget article, "Beyond NVLink, another concern we have about the RTX 4090 is simply how much power it demands (and how much heat output that will translate to). The physical design of the card is going to make using more than two RTX 4090 cards impossible without liquid cooling, but even then, you will find yourself to be power limited very quickly." I mean, a 4090 is about the size of an entire Mac mini and consumes vastly more power just by itself, without the whole rest of the computer. Assuming Apple creates a discrete GPU for something like the Apple Silicon Mac Pro, then we can have this discussion. That you're bringing it up here shows you're trying to stack the deck.

The majority of the gains in the new Nvidia GPUs are related to rendering--great for games, great for 3D rendering apps and some video editing. Heck, I have an A6000 I use for that myself--it's fabulous for 3D rendering and worth the price. But that comes at a cost of massive size, power requirements, heat, and a pretty hefty price for what is basically a niche use. For general-purpose use and things like the Adobe suite, that's all basically wasted power, as it's not needed at all. That's all "content creation" as well, so the article's title is a bit of a misnomer.

My personal experience using Windows workstations with previous-generation RTX 4000, 5000, and 6000 series cards and the new A6000 card is that, while there are plenty of advancements there, they're mostly useless to the majority of applications outside of games and 3D rendering (and on the video side, DaVinci Resolve does make good use of them, but not all video apps do). I haven't seen significant performance gains on the PC side for Lightroom and Photoshop in years; heck, even Puget recommends an old 3060-series card for those uses. On the other hand, the Mac's performance advantage in Lightroom and Photoshop is very noticeable and easy to benchmark. I have a whole series of graphs over on a photography-centric site that show just how fast a Mac Studio Max or Ultra is compared to some extremely powerful PC CPU/GPU combinations.

The Apple Silicon versions of Photoshop and Lightroom are both optimized quite well and leverage the Neural Engine very effectively in addition to the GPU cores. Remember, unlike with a discrete GPU and CPU, this is all happening in very high-speed pooled memory with a ton more bandwidth. That has a measurable effect.
 
Well, I'm talking about the laptop 4090, which runs in, well, a laptop, not the three-slot desktop card. I can see you are passionate about this subject, but man, I don't get what magic you think is in the Apple SoC that lets it defy physics. The Ultra has, what, 800GB/s of bandwidth, which is great. But the SoC gets so big with the Ultra that it doesn't even seem to scale like it should. It overheats, and it doesn't scale; there's only so big that SoC can get. That's why Apple abandoned the Extreme. The 4090 has 1,008GB/s of bandwidth and 24GB of GDDR6X on a 384-bit interface. I mean, it costs over $1,500 (the desktop card), and the mobile one won't be cheap. Are you really saying there's any planet where Apple's version of integrated graphics compares? Also, the AMD 7040 is coming out and should have a comparable GPU (RDNA 3). I don't think we know the specs, but I'm sure it will be very comparable to the M2.

AI is a whole other ballgame, but Nvidia is the AI hardware leader, so I'm not sure why you think the Apple chips would have better AI than Nvidia, lol. Again, I can only assume you are on the Apple marketing team or something. Anyway, I'm done responding here.
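For reference, here's where the 1,008GB/s figure comes from (a quick sketch; it assumes the desktop card's 384-bit bus and GDDR6X at an effective 21Gbps per pin, with Apple's quoted 800GB/s alongside for comparison):

```python
# Where the 1,008 GB/s number for the desktop RTX 4090 comes from:
# 384-bit memory bus, GDDR6X at an effective 21 Gbps per pin.
bus_width_bits = 384
effective_gbps_per_pin = 21

rtx4090_gbs = bus_width_bits * effective_gbps_per_pin / 8   # = 1008 GB/s
m1_ultra_gbs = 800                                          # Apple's quoted figure

print(f"RTX 4090 (desktop): {rtx4090_gbs:.0f} GB/s")
print(f"M1 Ultra unified memory: {m1_ultra_gbs} GB/s")
# The laptop 4090 uses a narrower memory bus, so its bandwidth is lower than
# the desktop card's.
```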
 
You keep trying to put words in my mouth. Where did I claim it would defy physics? You post a benchmark comparing a desktop 4090, then switch to talking about the laptop version. You can't keep moving the goalposts. Show me proof the Ultra overheats, because it doesn't. I have one; I can stress it hard and the fans never even spin up. In fact, there's a decent argument that Apple could have pushed it harder and let it run a little hotter to get more performance.

Again, you reveal your ignorance: the bandwidth between GPU memory and the GPU cores is one thing, but data still has to cross the PCIe bus, which maxes out far slower, and that's going to be a bottleneck. Go back up and read what I posted about the maximum bidirectional speed of PCIe 5.0 x16.
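To put rough numbers on that (a toy illustration only; it assumes a 24GB working set, ~63GB/s one way over PCIe 5.0 x16, and the Ultra's quoted 800GB/s, and real workloads overlap transfers with compute):

```python
# Toy illustration, not a benchmark: moving a large working set across the
# PCIe bus vs. reading it straight out of unified memory.
working_set_gb = 24          # e.g. a dataset the size of the 4090's VRAM
pcie5_x16_one_way_gbs = 63   # approximate one-way payload rate
unified_memory_gbs = 800     # M1 Ultra's quoted figure

print(f"Copy {working_set_gb} GB over PCIe 5.0 x16: "
      f"~{working_set_gb / pcie5_x16_one_way_gbs:.2f} s")
print(f"Stream {working_set_gb} GB from unified memory: "
      f"~{working_set_gb / unified_memory_gbs:.3f} s")
# Treat this as a bound on the copy penalty, not a prediction of application
# performance -- but it shows why GPU memory bandwidth alone isn't the story.
```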

Again, back to the Neural Engine. It doesn't matter if most people use Nvidia GPUs for specialized AI work; what matters is how the software can use those capabilities. You've also got the same PCIe-bus issue there, BTW. I clearly stated that Adobe has been able to leverage the Neural Engine in Apple Silicon to boost performance, and for whatever reason it hasn't been able to get the same performance out of the Nvidia cards (and I'll refer you back to tests by Puget that clearly demonstrate that jumping to the newer, higher-powered Nvidia GPUs yields little to no performance gain in Photoshop and Lightroom).

Resorting to speculation about an upcoming AMD GPU is a joke. Shall we speculate about the M3 GPU too, then? No, we're talking about real-world performance today, not speculation or synthetic benchmarks. Looking at your post history, it seems like you're a PC user here to troll people. I'm not going to waste more time chasing your specious arguments.
 
Apologies if this has been covered in the previous 10 pages but I’m not going to read the whole thread.

Just looking at the geekbench browser these results seem really impressive to me.


Based on what’s currently listed there, this max mini would be top on single core and 5th on multicore. Beaten only by some configs of Mac Pro and one config of Mac Studio.
 
Based on what’s currently listed there, this max mini would be top on single core

Technically, yes, but I wouldn't overthink that. The current top one, for example, also has some scores up to 1952. Geekbench results aren't that stable.

The M2, M2 Pro, and M2 Max all have the same cores (just different amounts of them) running at the same clock, so their single-core results will be very, very similar. (There are slight differences, e.g. because the Air needs to throttle more since it has no fan, and because the Pro and Max have more memory bandwidth, but they're minuscule for benchmarking purposes.)

and 5th on multicore. Beaten only by some configs of Mac Pro and one config of Mac Studio.

Yes; the Pro here also benefits because it comes with more cores than the M1 Pro did.
 
Probably a difference between the same laptop with different configs!

👍
I've got the base 1TB model with only 16GB of RAM, so that doesn't explain it. All the base models like mine I've seen have a multi-core score of 12,000 or above, so that's why I'm curious where MacRumors came up with their numbers.
 
I have a top-spec 12th-gen Intel laptop (with 8GB Nvidia RTX A2000 graphics) on the way for someone at work that uses an application that isn't available for MacOS. I'm planning on doing some benchmarking and I'll toss that into the batch. I suspect that even with the high GB scores it won't match up in real-world use to the 16" M1 Max, and on top of that it weighs 3lbs more and no doubt will be hot and sound like a jet engine. I mean, Lenovo even has this line on the webpage "...redesigned vents send hot air out the back of the system, not the sides." My M1 Max has never once ramped up the fans to where I can hear them, and has never felt more than slightly warm, never hot.

Dude you got a Dell!
 

Adobe ain't stupid. They optimize their software for the most relevant CPU market share, and that market is low core counts, strong single-core performance, and mediocre GPUs; nobody would be using their apps if it weren't like that. That's why the best performance you will ever get out of Photoshop is on high-end gaming chips. Adobe apps don't scale well with core count, and they aren't interested in going beyond 8 cores; even their NLE sees no return above 8 cores. After Effects was unlocked last year, so it can use as many cores as you throw at it quite efficiently when it renders. Overall, getting a Mac Studio for the Adobe suite is really a waste. Illustrator will come to a halt no matter what you have inside the box :) the worst Adobe app ever!
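That scaling point is basically Amdahl's law; a toy sketch below (the 40% serial fraction is invented purely for illustration, not a measured figure for any Adobe app):

```python
# The 40% serial share below is made up for illustration; it is not a
# measured figure for any Adobe app.
def speedup(cores, serial_fraction):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

for cores in (4, 8, 16, 24):
    print(f"{cores:>2} cores -> {speedup(cores, serial_fraction=0.4):.2f}x")
# Prints roughly 1.82x, 2.11x, 2.29x, 2.35x: the gains flatten out past ~8
# cores, which is the "doesn't scale with core count" effect in a nutshell.
```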

When it comes to shared memory, there are workflows that will hog your 32GB or even 64GB Apple SoC because they crave both RAM and GPU video memory. Some of my node compositing flows will eat up 80GB of RAM alone, and that doesn't include the GPU nodes, which use VRAM. So in theory, shared soldered memory is great if you have enough of it, and that comes with a hefty price.

When it comes to high-end dedicated gaming GPUs, I can also make a case for how they benefit in the areas where Apple Silicon blossoms. With my RTX I was able to debayer and play back in real time every codec I have thrown at it, in any recording format; I was not limited to ProRes only. Also, I'm not using Premiere or Resolve either. Resolve sucks at multi-GPU scaling anyhow.
 
I've got the base 1tb model with only 16 gb of ram so that doesnt explain it. All the base models like mine i've seen have a multi core score of 12000 or above so thats why i'm curious where macrumors came up with their numbers

🤷 you got me, then.

You running the latest version of Ventura?
 
Competition is not a bad thing. I, for one, am glad someone got Intel to start competing and making better, faster chips.




Battery Life? Well, it lasts forever when plugged in!



I am curious, because this is not the first comment like this: is this a political thing, like you guys are being penalized for using more energy, or are you just trying to be more green-conscious?
Surely you are aware of the massive energy price increases people in Europe are experiencing due to the war in Ukraine and the loss of Russian oil and gas? Prices all across Western Europe are up 100% to 1,000% or more over the past year (it fluctuates). People have serious worries about heating their homes because of the massive increase in cost, and they have no recourse.

While I’m not sure a computer uses enough energy for this to have a real impact, I also don’t watch my usage that closely, as we are largely insulated here in the U.S. from these effects (and my energy use is low since I have a small household).
 