You know that these cores are substantially more efficient than x86, so I was not expecting a comment on heating!

Overall system power will be lower, but it's hard to isolate the CPU efficiency. These GB scores are essentially identical to my self-built i7-12700K machine, and that processor runs at about 175W at max power draw. The M1 Max runs at 215W, and I'd expect an increase with the M2 Max.

There are certainly some savings since the M2 also includes the GPU, which means overall it will be a lower power system, but I’m not sure it’s a massive CPU efficiency difference.
 
The multi-core benchmark doesn't sound right. The M2 Max has 2 more performance cores; they should give an advantage of 15-25% in multi-core Geekbench scores, all other things unchanged.
I'm guessing it's 2 more E-cores, to bring it in line with the quad-core entry-level models.
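A rough back-of-envelope behind those two guesses (purely illustrative; the 0.3x E-core weighting and linear scaling are assumptions, not measurements from the article):

# Back-of-envelope for the multi-core guesses above (not from the article).
# Assumption: one E-core is worth roughly 0.3x a P-core in a Geekbench-style
# multi-core run, and scaling is roughly linear (ignores bandwidth/thermals).
E_CORE_WEIGHT = 0.3  # hypothetical relative throughput of an E-core

def throughput(p_cores, e_cores):
    return p_cores + E_CORE_WEIGHT * e_cores

m1_max = throughput(8, 2)       # M1 Max: 8 P-cores + 2 E-cores
plus_two_p = throughput(10, 2)  # hypothetical M2 Max with 2 extra P-cores
plus_two_e = throughput(8, 4)   # hypothetical M2 Max with 2 extra E-cores

print(f"+2 P-cores: ~{100 * (plus_two_p / m1_max - 1):.0f}% multi-core uplift")  # ~23%
print(f"+2 E-cores: ~{100 * (plus_two_e / m1_max - 1):.0f}% multi-core uplift")  # ~7%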
 
A 2-3 year cycle should be adopted instead of this once-a-year new chip … what's the point of wasting resources on such a minor update?
Apple's Mac processors are effectively on a 1.5 to 2 year cycle. The M1 (and its family) were released between Fall 2020 and WWDC 2022. The M2 was released at WWDC 2022. So I don't expect the M3 until the first half of 2024.
 
People that needed a laptop for office/WFH stuff already bought the M1 one in 2020. Pros that needed "power" already have the M1 Pro/Max/Ultra & Studio stuff. Does Apple really think we replace our laptops on a yearly basis, just like iPhones?

There are still people on usable but older models, and every increase in performance Apple can deliver will push more of those people to upgrade. Apple also doesn't expect everyone to own the exact same generation of computer and all upgrade at the same time. Not everyone needs a machine at the exact same time.

The more frequent the updates the better, as it means there is no need to wait for the next big release. Imagine if they only updated every 5 years and your machine broke: who wants to buy a 4-year-old computer? Computers used to get spec bumps multiple times a year; updating every 1-2 years isn't some revolutionary breakneck pace.
 
Maybe it is the base model. How low can the base go?
Well, an incremental performance improvement with this rumored SoC over the M1 Max/Ultra, along with RAM increased to 96GB/192GB, sounds like all that is needed to provide a larger workstation Apple Silicon Mac as an alternative to the existing Intel Mac Pro. Once this is accomplished, and as better SoCs become available, we could finally see the Mac Pro updated at more frequent intervals than the many years between updates from 2013 to 2019.
 
There are still people on usable but older models, and every increase in performance Apple can deliver will push more of those people to upgrade. Apple also doesn't expect everyone to own the exact same generation of computer and all upgrade at the same time. Not everyone needs a machine at the exact same time.

The more frequent the updates the better, as it means there is no need to wait for the next big release. Imagine if they only updated every 5 years and your machine broke: who wants to buy a 4-year-old computer? Computers used to get spec bumps multiple times a year; updating every 1-2 years isn't some revolutionary breakneck pace.
With such a small improvement over the previous generation, Apple will have to throw in a few goodies like Wi-Fi 6E, screen upgrades, encoder updates like AV1 encode/decode, Neural Engine upgrades, or other sprinkles of improvement here and there. Otherwise, the M1 Max is already well more than enough for an office machine, so much so that it makes almost zero sense to spend the extra money on an M2 Max when the M1 Max more than suffices.
 
Well, an incremental performance improvement with this rumored SoC over the M1 Max/Ultra, along with RAM increased to 96GB/192GB, sounds like all that is needed to provide a larger workstation Apple Silicon Mac as an alternative to the existing Intel Mac Pro. Once this is accomplished, and as better SoCs become available, we could finally see the Mac Pro updated at more frequent intervals than the many years between updates from 2013 to 2019.
The Intel Mac Pro can go up to 1.5TB of RAM. 192GB is nothing for those users
 
I just hope that, if it runs hotter than its M1-based predecessors, Apple adjusts the cooling systems to accommodate the difference. With M2-based Macs so far, Apple has either kept the exact same chassis and CPU/SoC coolers (in the case of the 13-inch MacBook Pro) or has reduced cooling (in the case of the MacBook Air), and the M2 is a hotter-running SoC than the M1.

As it stands currently, the M1 Pro (at least the 10-CPU-core variants) and M1 Max were very clearly engineered for the 16-inch MacBook Pro chassis, cooling system, and battery first and foremost, with the 14-inch MacBook Pro chassis being the second-class citizen for those same SoCs.

What I don't want is for the 16-inch MacBook Pro to run hotter and/or with worse battery life because Apple didn't want to change the cooling system at all (like they did with the M2 version of the 13-inch MacBook Pro when jumping from the M1 version).
 
What's legacy about HDMI? It's still widely used on monitors and TVs. 8K TVs use it. HDMI 2.1 is higher bandwidth than Thunderbolt 4.


Ahh, I see you like the 2016 MBP design. Makes sense; those **** computers throttled like hell due to being so thin.

Oh, and guess what: HDMI 2.1 can output 8K 60Hz or 4K 120Hz. Thunderbolt 4 cannot do that. So don't call HDMI a legacy port. You have no idea what you're talking about.
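For what it's worth, the raw numbers behind that comparison are easy to sanity-check; a rough Python sketch of uncompressed pixel rates only, ignoring blanking intervals, protocol overhead, chroma subsampling and DSC compression (so real links behave differently):

# Uncompressed pixel-data rate only; real links add blanking and protocol
# overhead, and 8K60 over HDMI 2.1 typically relies on DSC compression.
def raw_gbps(width, height, refresh_hz, bits_per_pixel):
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"8K60, 8-bit RGB:  {raw_gbps(7680, 4320, 60, 24):.1f} Gbps")   # ~47.8
print(f"4K120, 8-bit RGB: {raw_gbps(3840, 2160, 120, 24):.1f} Gbps")  # ~23.9
print("HDMI 2.1 max link rate:  48 Gbps")
print("Thunderbolt 4 link rate: 40 Gbps")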
I use my MacBook Pro every day to run ML and AI models and I have no problem. HDMI what? I see you still use cables hahaha what a loser
 
You do know that AMD will release 15-watt to 45-watt mobile Ryzens
Running at the same performance level as the M1 Max?

So, if you run Cinebench on this AMD chip, it'll run at the same wattage as the M1 Max? And not around 150 watts higher before it thermal throttles? (No need to reply because we know the answer.)
 
I use my MacBook Pro every day to run ML and AI models and I have no problem. HDMI what? I see you still use cables hahaha what a loser
Ooo I see you are so bad at coding that your models are not optimized hahaha what a loser x 2
 
That's a pathetic 11% increase only, after a more-than-a-year release cycle. This is a worse increment than the Intel Mac upgrades. At least those were upgraded by more than 15%, as far as I remember.
Are you kidding me? People forget so quickly. Wow.

We had some year-over-year generations, such as the 15" MBP in mid-2014 and mid-2015, where they kept the same freaking Intel CPU because Intel's new process nodes were so delayed. It was the Intel Core i7-4870HQ and Intel Core i7-4980HQ used in BOTH. We also had other years where Macs would go a couple of years between upgrades because Intel hadn't come out with anything better. At least this cadence is better, and the M1 Max was already insanely fast, so this is a decent improvement.

Then we also had years where there was only a small improvement. Let's randomly take the Late 2015 iMac and mid-2017 iMac. One had the Intel Core i7-6700K and the other the Intel Core i7-7700K. Comparable generationally. The 7700K was 5.3% faster in single core and 4.7% faster in multi-core. And these two machines were announced 1 year, 7 months, and 23 days apart.

The only thing "pathetic" here is the memory of the people on this forum, lol. We were suffering under Intel for ages, guys. Never forget that. And with 2-4 hours of usable battery life that was burning up our laps. Sure, some generations had bigger improvements, but especially in the latter years of the 2010s, the YoY gains just weren't really there most of the time.
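(For anyone checking the math, every percentage thrown around in this thread comes from the same simple ratio; a tiny sketch with made-up placeholder scores, not real Geekbench results:)

# percent uplift = (new / old - 1) * 100; the scores below are placeholders.
def uplift(old_score, new_score):
    return (new_score / old_score - 1) * 100

print(f"{uplift(12000, 13320):.1f}%")  # ~11%, the kind of figure discussed above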
 
Looking at the figures in this article, I would be very surprised if they jumped from a 5nm process to a 3nm process with an 11% increase in performance, unless they put less emphasis on performance and more emphasis on battery life. But considering how good the battery life already was, that seems unlikely to me.

One possibility, since we don't see the GPU scores, is that they put more of the TDP budget into the GPU. That is one area where they are lagging behind a bit. On 3nm they could put a fairly beefy GPU into this thing.

Personally I'm holding out for the third generation of this thing. Software continues to port over to Apple Silicon and bugs are being worked out. I'm hoping that by that generation we might get 128GB of RAM so that I can upgrade to 64GB for a reasonable price, lol. Also hoping to see some better SSD prices so I can get 4TB. And we'll for sure be on 3nm by then, and it's more likely they'll have a more resilient OLED or Face ID or something else newer. My 2019 MBP is slower and gets hot sometimes, but it's still plenty fast, so I can't justify it yet. Although I'm going to use this new machine to replace both my iMac and my MBP, using a Thunderbolt dock to easily take it on the go. I get annoyed with having two machines, and this will actually save me money since they're so expensive. Hopefully by then there are more third-party 5K displays on the market too. I have a couple of decent LG 4K displays on the side, but I want something good for my primary.
 
That's a pathetic 11% increase only, after a more-than-a-year release cycle. This is a worse increment than the Intel Mac upgrades. At least those were upgraded by more than 15%, as far as I remember.
Actually, Intel released many iterative "generations" of processors that only had small single-digit % performance improvements, some as low as 5%, and many others only gained more performance through aggressive frequency and power increases (heat and battery, anyone?). Intel was essentially stuck on their Skylake architecture for YEARS as they struggled to move from 14nm to 10nm. Now they have gotten a bit back on track. Ironically, some of Apple Silicon's current meandering has to do with TSMC's node improvement schedule and scaling.

It's not reasonable to expect more than a 10-15% increase in performance year to year or gen to gen within the same processor family; you will generally get larger performance bumps when new process nodes become available, or every few years when a major architectural redesign is released. You are not going to see the x86-to-arm64 performance shift repeated every year; you need to reset your expectations.
 
It's too early in the process to treat these numbers as gospel and make some sort of value judgment that the M2 Max is going to be a mediocre upgrade. This is a pre-release SoC and these are pre-production numbers. Complaining about pre-production numbers at this point is just silly speculation.
 
The M1 is the biggest leap in the CPU game of the last 20 years. The combination of performance and battery life is just as amazing today as it was 2 years ago. It gave Apple a massive head start and they are in no rush towards 3nm. I don't see developers using Macs with M1 Pro/M1 Max chips complaining about performance, do you?
 
I just want an M2 Mac Mini with more TB ports/busses and 32GB/64GB of RAM. I have an Intel NUC Phantom Canyon with 64GB of RAM w/ dGPU and it's faster than my 2018 Core i7 Desktop. I would love to see the Mini with similar specs for memory and better GPU.
 
Let’s not forget that the M2 GPU had a very significant increase in cores and performance. We are expecting something like 40 cores on the M2 Max.

Even if CPU perf is up by 10 to 15%, the update might be worth it for the GPU alone.
 
Do you believe that? What's more likely: that Apple uses N3 to make a slower CPU, or that they use a horizontally scaled A16/M2 on the already relatively mature N5P? Not to mention that these scores are 100% in line with M2 performance (just add a P-cluster).

You are making up stuff I didn't write. It isn't slower. All the benchmarks here are faster than the M1, so 'slower' isn't really an issue. What you are trying to present is "slower than it could have been". Well, the M1 Ultra could have had more PCIe outputs if Apple had put them in. Woulda-coulda-shoulda isn't a factor of 'slower'.

The same architecture (IPC) attached to the same generational memory (and bandwidth) and run at about the same clocks is probably going to turn in the same scores. However, if it is smaller then it is cheaper to make (more dies per wafer). And if it is optimized for better power utilization, then better perf/watt.

N3 doesn't mandate that the clocks have to go faster or that the chip dies have to be "as big as possible" (i.e., throw an even bigger kitchen sink of stuff onto the integrated 'everything' die).
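A minimal sketch of that point, using the commonly reported (approximate, unofficial) P-core clocks for the M1 and M2:

# Same micro-architecture -> same IPC, so single-thread score scales roughly
# with clock. Clocks are commonly reported figures, not official Apple specs.
ipc = 1.0                # relative IPC, unchanged between the two designs
m1_p_clock_ghz = 3.2     # reported M1 "Firestorm" P-core clock (approx.)
m2_p_clock_ghz = 3.5     # reported M2 "Avalanche" P-core clock (approx.)

scaling = (ipc * m2_p_clock_ghz) / (ipc * m1_p_clock_ghz)
print(f"Expected single-thread uplift from clock alone: ~{(scaling - 1) * 100:.0f}%")  # ~9%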


TSMC N3 is a bit of a double-edged sword. They are trying to compensate somewhat for memory dropping off the density-gain track by offering FinFlex (3 different FinFET options available on a single die). That is a workaround, but the trade-off is that the wafer costs are even higher.

Apple isn't shipping "race to the bottom pricing" SoCs, but their SoCs are not immune to price elasticity issues either (e.g., when inflation drives Mac prices higher, people complain). Apple SoCs carry a high price, but if production costs increase that will eat into the margins.

Just mastering N3's new complexities (where to use the 'flex' and where not, and how to juggle the trade-offs across the whole SoC die) is going to be a hard enough problem without throwing brand-new microarchitecture implementation issues at it at the same time. I don't think it is likely Apple wants to throw maximum complexity at a first-iteration N3 implementation. The same issues that AMD is running into are going to hit Apple also at smaller fab implementation nodes.

Apple's SoC implementations have tended to throw a higher-than-normal amount of cache at trying to push the performance curve harder with lower overall power consumption. If memory detaches from the same density trend curve that the compute logic is coupled to, that gets harder to do while keeping the die costs about the same (the implementation area doesn't shrink, but the wafer area costs more).

The Pro-sized die is under less pressure than the Max-sized die, but Apple appears to be keeping those two coupled on the same fab process implementation. A smaller Pro-sized die isn't going to 'hurt' getting high volume out of fewer wafers either.
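To make the die-size/wafer-cost trade-off concrete, here is a rough sketch using the standard dies-per-wafer approximation; the wafer prices, die areas and yield are round placeholder numbers, not TSMC or Apple figures:

import math

# Standard dies-per-wafer approximation; all dollar figures, die areas and the
# yield value are illustrative placeholders, not actual TSMC/Apple numbers.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, die_area_mm2, yield_rate=0.8):
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_rate)

# Hypothetical: a 430 mm^2 die on a $10k N5-class wafer vs. a 350 mm^2 die on a
# pricier $16k N3-class wafer; the smaller die doesn't automatically come out cheaper.
print(f"N5-class, 430 mm^2: ${cost_per_good_die(10_000, 430):.0f} per good die")  # ~$95
print(f"N3-class, 350 mm^2: ${cost_per_good_die(16_000, 350):.0f} per good die")  # ~$120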


Back when Intel was closely following tick/tock, improvements were steady and chips came out roughly "on time". Intel threw everything and the kitchen sink at Xeon Max (Sapphire Rapids) and it is late. From an engineering project risk-management perspective, those two outcomes are really not all that surprising.
Apple has timeliness issues to manage in its implementations as well.


To me it's disappointing because I want three things. First, a split between consumer and prosumer desktop hardware.

You are mixing up two things here: consumer (laptop) and prosumer desktop. The consumer desktop Mac is going to get a high-performance laptop SoC. It really was that way on Intel (x86) too, so I'm not sure why there would be an expectation that it would change for the M-series. The Mini and the new iMac 24" enclosures are not big boxes with very high airflow throughput. For a long time the Mini was relatively crippled because the CPU cores took up such a high share of the overall TDP budget that you couldn't put any substantive GPU in there. So the consumer desktop Mac and consumer laptop Mac are likely going to remain coupled.

The higher end desktop hardware CPU cores are likely not going to get decoupled from laptop/mobile origins. That is the volume that pays the R&D bills. Basically the same thing for the GPU cores.

The Mac Studio backs off the "thinnest" enclosure constraints, but not by a huge amount.

There will probably be a split in implementation when the larger laptop CPU and GPU cores drive a bigger disconnect with demands for a larger L3/system-level cache, which is going to stop scaling as well. That is probably going to bring a different disaggregation implementation (e.g., like AMD moving memory controllers and cache off the central die) rather than a completely different core design/implementation.


Second, a new generation of P-cores (so far we have had three generations of Apple Silicon on what is essentially the same P-core frontend and backend, with just a few incremental tweaks and new instructions).

If Apple is staying on the older N5 family implementation, why would that change? If they tried to go to a much larger P-core complex, production costs would go up (it consumes more die area), but then they wouldn't be able to move GPU performance forward with more cores. The E-cores got more focused improvements, which have a lower die-area impact.

And when P-core improvements come, there is a pretty good chance they will be weighted more toward AMX/SIMD impact than toward hot-rod single-thread drag racing (more compute on tightly structured and/or clustered data rather than even bigger caches and even deeper speculative execution).


Third, competition with x86 in the desktop segment. These M2 Max scores are enough to secure a lead among laptops in the first half of 2023, but Apple is getting outgunned in the desktop space.

70+% of what Apple sells is laptops, so they are winning where they sell the most. Consumer desktops? Not really outgunned there either (the average Windows desktop isn't shipping with a mid-to-upper range dGPU in it), so that is an even smaller gap. Those are pretty much where the "M2 Pro" and plain M2 would be going.


In the upper end of the desktop space, Apple is being 'outgunned' more by software than hardware. macOS is capped at 64 threads, whereas AMD/Intel bring 100+ threads to high-end desktops. There are zero 3rd-party GPU drivers for macOS on M-series; macOS on Intel dropped Nvidia, but Apple has now dropped everyone else. That is a bigger 'hole' than hardware. Bringing only one CPU or GPU package to the gunfight will lead to being outgunned: Apple brings one SoC (gun) and the other options bring 2-4 GPU packages to the fight.
 
Their decision not to release a Mac Pro and to wait for the M2 is an indication that they didn't like the results of the Ultra.
No, it is not. There are many reasons to delay a new MP release. I doubt that Apple ever seriously thought that an Ultra chip built on a 5nm process was going to be the solution to their need for a hot new MP. Odds are that all along Apple knew that 3nm was necessary to give them enough transistors to make a statement.
 
Overall system power will be lower, but it's hard to isolate the CPU efficiency. These GB scores are essentially identical to my self-built i7-12700K machine, and that processor runs at about 175W at max power draw. The M1 Max runs at 215W, and I'd expect an increase with the M2 Max.

There are certainly some savings since the M2 also includes the GPU, which means overall it will be a lower power system, but I’m not sure it’s a massive CPU efficiency difference.
If the M1 Max was using 215W it would burn; there is no cooling capacity for that.
There is a hard cap at around 45W on the CPU under max load, for both the MacBook Pros and the Studio, but there is hardly any load you can run that pushes it to that limit.

Here is what I have measured on M1 Max when pushing it to the limit:
High power profile,
User interface lagging, fan at 5000+
ANE Power: 0 mW
DRAM Power: 1380 mW
CPU Power: 41449 mW
GPU Power: 40006 mW
Package Power: 87174 mW
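(For anyone who wants to reproduce figures like these, they can be sampled with macOS's built-in powermetrics tool; a minimal sketch, assuming a recent macOS where the CPU/GPU power samplers report these rails, and noting that it needs sudo and the exact field names can vary by version:)

# Minimal sketch: sample CPU/GPU/package power once via macOS's powermetrics.
import subprocess

result = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "cpu_power,gpu_power", "-i", "1000", "-n", "1"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    if "Power" in line:  # e.g. "CPU Power: 41449 mW", "Package Power: ..."
        print(line.strip())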

Comparing it to Intel is just ridiculous. Why do you think all Intel-based notebooks drop to about 50% of their performance when unplugged? Because there is no chance in hell that the battery could deliver that much power without severely damaging itself, possibly burning or exploding.

At this point, the M1 Pro/Max notebooks are the best you can buy on the market. The M2 is not going to change much. We have to wait and see what Qualcomm is cooking; I am expecting great things from them soon. Intel's best days are gone.
 