"My M2 MacBook Air is probably the fastest Mac I've ever owned. For some reason, it feels snappier than my 16" M1 Pro."

That's because the M2 has better single core performance than the M1 Pro.
Like I said, nothing terrible, but I'm hoping Apple has significant design improvements up their sleeve for the A17 or so.
I'm thinking of replacing my son's 2014 27" iMac 5K with the M2 Pro Mini. Would there be a noticeable improvement on the user experience front (apart from ditching the need to vacuum the ventilation slots weekly)?
I'd need a suitable monitor as well, of course. Any suggestions? Gotta be fit for gaming, too.
With the M2 Pro and M2 specced out the same for RAM and SSD size, is the M2 Pro worth the extra $300? You get a 10-core CPU and 16-core GPU vs. an 8-core CPU and 10-core GPU, two extra Thunderbolt ports, and HDMI 2.1 support. Wondering what everyone's thoughts on this are. Thx
"A lot of people on here are saying that, however how come the M1 Max is faster by 2000 on multicore than the M1 Pro?"

But it doesn't.
"Scores are well and good, but it's still just integrated graphics at the end of the day."

On synthetic benchmarks, yes; in real-world use, not so much. I ran a bunch of timed tests of the typical tasks I do in Adobe Photoshop and Lightroom, and even against a machine with an Nvidia RTX 6000 that has huge compute scores on Geekbench and a processor that matched the single-core and had higher multicore scores, my Mac Studio Ultra actually destroyed the PC. There's so much more to day-to-day use and application performance; these synthetic benchmarks are perhaps good for comparing generation to generation on the same platform, but they clearly do not work for comparing across platforms and do not reflect real-world application performance.
13950HX CPU (24 cores) with RTX 4090 (9,728 CUDA cores) GPU laptops will smoke these utterly and completely.
"How is the battery life, fan noise and heat on those 13th gen Intel laptops?"

Yeah, Intel's Cinebench scores are great and all, but I simply cannot go back to loud, hot laptops with poor battery life.
And what about the performance when unplugged?
I have a top-spec 12th-gen Intel laptop (with 8 GB Nvidia RTX A2000 graphics) on the way for someone at work who uses an application that isn't available for macOS. I'm planning on doing some benchmarking and I'll toss that into the batch. I suspect that even with the high Geekbench scores it won't match up in real-world use to the 16" M1 Max, and on top of that it weighs 3 lbs more and will no doubt be hot and sound like a jet engine. I mean, Lenovo even has this line on the webpage: "...redesigned vents send hot air out the back of the system, not the sides." My M1 Max has never once ramped up the fans to where I can hear them, and has never felt more than slightly warm, never hot.
Agreed, benchmarks don't mean a whole lot.
Are you familiar with the term "empty comparison"? There's literally no data on these graphs. The only "creator" app I see is DaVinci Resolve, and again, there's no baseline besides saying it's a certain percentage faster than the previous model of the same laptop. That's entirely useless as a measure of performance unless all you care about is those two laptop models. I mean, even his followers on YouTube are skewering him for this. As an aside, this is what I hate about the death of real technology media and the rise of the "YouTube expert": anyone with a camera and enough of a following becomes an expert, no matter how garbage their methodology is.
But on the 4090 laptop GPU they are quoting a 2.5x speedup in creator apps versus their previous best dedicated laptop GPU, the 3080 Ti: https://wccftech.com/nvidia-geforce...-3-2-4x-gain-in-creation-apps-versus-3080-ti/
Unless there are software issues, I have no idea how this wouldn't blow the M2 GPU out of the water. Both Intel and Nvidia are really stepping up their game thanks to Apple and AMD.
The performance differences between the 40xx and 30xx Nvidia architectures are well known and well tested at this point. The 40xx really is much faster in terms of fps for games, and there are plenty of benchmarks for the creator apps too: https://www.pugetsystems.com/labs/a...e-rtx-4090-24gb-content-creation-review-2374/, for example, shows it is a massive upgrade over the 30xx.
You also clearly don't understand the huge advantage the unified memory architecture gives GPU performance on Apple Silicon: with over 6x the bandwidth of even PCIe 5.0 x16, the GPU cores can constantly be cranking.
LOL. No, you clearly don't understand that the Apple Silicon architecture is not like an embedded GPU in the x86 world; the memory bandwidth is 10-20x faster and the amount of memory available is huge in comparison. You are trying to compare apples and oranges--and the benchmarks you just cited are not capable of being run on Apple Silicon, so again, it's a pointless comparison and reveals how little you actually know about how these things work and what their actual specs and capabilities are.
Methinks you might be a little too caught up in the reality distortion field to see clearly if you think that integrated graphics compares to top-of-the-line discrete graphics chips.
Well, I'm talking about the laptop 4090, which runs in, well, a laptop, not the three-slot card variant. I can see you are passionate about this subject. But man, what magic do you think is in the Apple SoC that lets it defy physics? The Ultra has, what, 800 GB/s of bandwidth, which is great. But the SoC gets so big with the Ultra that it doesn't even seem to scale like it should. It overheats, and it doesn't scale. There's only so big that SoC can get. That's why Apple abandoned the Extreme. The 4090 has 1008 GB/s of bandwidth, and 24 GB of GDDR6X on a 384-bit interface. I mean, it costs over $1,500 (the desktop card), and the mobile one won't be cheap. Are you really saying there's any planet where Apple's version of integrated graphics compares? Also, the AMD 7040 is coming out and should have a comparable GPU (RDNA 3). I don't think we know the specs, but I'm sure it will be very comparable to the M2.
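For context on the bandwidth numbers being traded back and forth here, a rough back-of-the-envelope comparison, assuming the commonly published figures: PCIe 5.0 x16 at roughly 63 GB/s per direction, Apple's quoted unified-memory bandwidth per chip, and the desktop RTX 4090's GDDR6X bandwidth (the laptop 4090's is lower). This is just a sketch of the arithmetic, not a benchmark:

```python
# Rough bandwidth comparison using commonly published figures (all approximate).
# PCIe 5.0 x16 moves about 63 GB/s in each direction; the Apple numbers are the
# unified-memory bandwidth Apple quotes per chip; 1008 GB/s is the desktop
# RTX 4090's GDDR6X bandwidth on a 384-bit bus (the laptop 4090 is lower).
bandwidth_gb_s = {
    "PCIe 5.0 x16 (per direction)": 63,
    "M1 Pro / M2 Pro": 200,
    "M1 Max / M2 Max": 400,
    "M1 Ultra": 800,
    "RTX 4090 desktop (GDDR6X)": 1008,
}

baseline = bandwidth_gb_s["PCIe 5.0 x16 (per direction)"]
for name, gb_s in bandwidth_gb_s.items():
    print(f"{name:30} {gb_s:5d} GB/s  ({gb_s / baseline:5.1f}x PCIe 5.0 x16)")
```

On those figures the Max lands at roughly 6x and the Ultra at roughly 13x a PCIe 5.0 x16 link, which is presumably where the "over 6x" and "10-20x" claims above come from, while the desktop 4090's on-card bandwidth is about 25% higher than the Ultra's.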
Absolutely no one has claimed that Apple can match a 4090--nor are they even *trying* to do so in these machines. To start with, it's huge and uses a huge amount of power and cooling; to quote from the Puget article: "Beyond NVLink, another concern we have about the RTX 4090 is simply how much power it demands (and how much heat output that will translate to). The physical design of the card is going to make using more than two RTX 4090 cards impossible without liquid cooling, but even then, you will find yourself to be power limited very quickly." I mean, a 4090 is about the size of an entire Mac mini and consumes vastly more power just by itself, without the whole rest of the computer. Assuming Apple creates a discrete GPU for something like the Apple Silicon Mac Pro, then we can have this discussion. That you're bringing it up here shows you're trying to stack the deck.
The majority of the gains in the new Nvidia GPUs are related to rendering--great for games, great for 3D rendering apps, and great for some video editing. Heck, I have an A6000 I use for that myself; it's fabulous for 3D rendering and worth the price. But that comes at the cost of massive size, power requirements, heat, and a pretty hefty price as well, for what is basically a niche use. For general-purpose use and things like the Adobe suite, that's all basically wasted power, as it's not needed at all. That's all "content creation" as well, so the article link is a misnomer.
My personal experience using Windows workstations with previous-generation RTX 4000, 5000, and 6000 series cards, and now the new A6000, is that while there are plenty of advancements there, they're mostly useless to the majority of applications outside of games and 3D rendering (and on the video side, DaVinci Resolve does make good use of it, but not all video apps do). I haven't seen significant performance gains on the PC side for Lightroom and Photoshop in years; heck, even Puget recommends an old 3060-series card for those uses. On the Apple Silicon side, on the other hand, the performance difference in Lightroom and Photoshop is very noticeable and easy to benchmark. I have a whole series of graphs over on a photography-centric site that show just how fast a Mac Studio Max or Ultra is compared to some extremely powerful PC CPU/GPU combinations.
The Apple Silicon versions of Photoshop and Lightroom are both optimized quite well and leverage the Neural Engine very effectively in addition to the GPU cores. Remember, unlike with a discrete GPU and CPU, this is all happening in very high-speed pooled memory with a ton more bandwidth. That has a measurable effect.
You keep trying to put words in my mouth. Where did I claim it would defy physics? You post a benchmark comparing a desktop 4090, then switch to talking about a laptop version. You can't keep moving the goalposts. Show me proof the Ultra overheats, because it doesn't. I have one. I can stress it hard and the fans never even spin up. In fact, there's an argument that Apple could have pushed it harder and let it get a little hotter to get more performance.
AI is a whole nother ballgame, but NVIDIA is the AI hardware leader, so I'm not sure why you think the Apple chips would have better AI than NVIDIA lol. Again, I can only assume you are on the Apple marketing team or something. Anyway, I'm done responding here.
Apologies if this has been covered in the previous 10 pages, but I'm not going to read the whole thread.
Just looking at the Geekbench browser, these results seem really impressive to me. Based on what's currently listed there, this max mini would be top on single-core and 5th on multi-core, beaten only by some configs of the Mac Pro and one config of the Mac Studio.
"Probably a difference between the same laptop with different configs!"

I've got the base 1 TB model with only 16 GB of RAM, so that doesn't explain it. All the base models like mine I've seen have a multi-core score of 12,000 or above, so that's why I'm curious where MacRumors came up with their numbers.
👍
"My predictions are an M3 Mac Pro and, next year, a Studio. Updating the Studio now would conflict with sales for a Mac Pro."

Hmm, I can see a Studio with Max and Ultra sitting beside a Mac Pro with expandability.
"Competition is not a bad thing. I, for one, am glad someone got Intel to start competing and making better, faster chips."

Surely you are aware of the massive energy price increases people in Europe are experiencing due to the war in Ukraine and the loss of Russian oil and gas? Prices all across Western Europe are up 100% to 1,000% or more over the past year (it fluctuates). People have serious worries about heating their homes because of the massive increase in cost, and they have no recourse.
Battery Life? Well, it lasts forever when plugged in!
I am curious, because this is not the first comment like this: is it a political thing, like you guys are being penalized for using more energy, or are you just trying to be more green-conscious?