I agree; of course, wait for actual real-world usage to do a true comparison rather than relying on the marketing, which is often misleading at best.

But if AMD's claims are even kind of true, this makes Apple look bad, imo. Apple has been the top brand for AS laptop performance, especially when it comes to battery life. If AMD is getting close to Apple's battery life while maintaining as-high or higher performance, it just doesn't look good.
You do realise it's a comparison against an Apple chip that isn't the most powerful, is over a year old, and was benchmarked on Cinebench, which is optimised for x86 cores, not ARM?
 
Don't think I've heard AMD mention Apple M-series chipsets yet. AMD is all I use on the Windows side: they don't make things confusing with their board sockets (unlike Intel), and they aren't power drinkers.

I'd be interested to see how the new Ryzen 7000 stacks up, but I won't be getting one for a while. I picked up a 5800X when they were on sale in October, so I'm set. Getting a new Mac will come first, too.
 
Yep, the whole macOS platform is kind of useless except for video editing (encoding), iOS/macOS development, and daily consumer tasks. 3D software on macOS is very unstable: Maya crashes all the time, Blender is not optimized, and 3ds Max doesn't even exist. No decent CAD apps like SolidWorks, Pro/E, CATIA, etc. exist.
Audio-wise it's also a disappointment: FL Studio and Ableton run worse than on Windows, and many nice VST plugins either don't exist for Apple Silicon or run badly.

I hope this AMD news is true, because the only real advantage Apple Silicon has is its performance-to-power-consumption (and hence battery runtime) ratio.

And I don't even want to get into gaming or other graphics features (shaders), where macOS and Apple Silicon just suck.
Linux, according to the Stack Overflow survey (actually, I'm not 100% sure), has just surpassed macOS as the more favoured platform to develop software on. macOS is really mostly for dedicated use cases or general light use. Safari is terrible. The entire OS looks more and more like iPadOS. I need to restart several times just to get my iPad syncing with my Mac again. And I don't even want to mention how bad macOS has been in terms of stability compared to what I knew before: 3 system-wide freezes in the past 2 months. Doesn't sound like a lot? The Windows 10 machine sitting right next to it has experienced no system-wide freeze or BSOD for several months (I think at least the past 5 months).

Gaming? Had they not effectively shut down the iOS-apps-on-macOS feature back in the Big Sur days, gaming on macOS would've been great. Now, it's just as bad.

I hope AMD and Intel can both produce amazing mobile chips with great battery life. My laptop is due for an upgrade, and I want something a bit more powerful.

I'm not saying the M1 MacBook doesn't have its own advantages. The chip is so efficient that I can have 3 external drives plugged in, two of which are HDDs, while I attempt a full backup on battery power alone. And it works! From 80% to 25%. Had this happened on a Windows PC, the battery would probably be dead before all the data was backed up.

We will see in the coming years who will take the lead.
 
Fans are still going to be blowing at 100% during idle.
AMD's Phoenix Point CPUs are actually built on TSMC 4nm, a more advanced node than the M1 and its siblings. AMD isn't f'n around, it seems. Well, unless you consider the RDNA 3 vapor chamber debacle. But that's another story for another day.
 
Being old enough to remember the G4 fiasco... I have two thoughts here.
1) AMD is simply playing catch-up, which is great. Q1 2023 to finally make a comparison to a Q4 2020 chip. Better late than never.
2) AMD leapfrogs Apple's CURRENT attempts, and we watch for 18 months as the M2/M3 becomes the new G4, stuck in place.
 
It's like saying one EV has more range than another: without referencing the size of the battery pack, it's hard to draw many conclusions about efficiency. Many modern laptop displays can draw 8-12 watts. Given a typical laptop battery size, 30 hours is challenging even with no CPU inside. Obviously you can get there with design trade-offs, like a small and dim screen, a much larger battery, etc.
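To put rough numbers on that (every figure below is an illustrative assumption, not a measurement of any real machine), here's a quick Python sketch:

battery_wh = 70.0   # assumed capacity of a typical 14-inch laptop battery (Wh)
display_w  = 8.0    # display draw at moderate brightness (W), low end of the range above
soc_w      = 1.5    # assumed SoC + memory draw at light load (W)
other_w    = 1.0    # assumed Wi-Fi, SSD, and board overhead (W)

total_w = display_w + soc_w + other_w   # 10.5 W system average
print(battery_wh / total_w)             # ~6.7 hours
print(battery_wh / 30.0)                # a 30 h claim implies a ~2.3 W system average

Which is exactly the point: a 30-hour number says at least as much about the battery and the display as it does about the CPU.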

I'm sure this is going to be a great product, and the efficiency will serve AMD well in servers and mobile compared to Intel's offerings. However, x86 has lost the efficiency game against Arm and RISC-V, particularly in the all-important data center. It's a legacy product on borrowed time, existing only to run instances of Windows Server and Linux x86 for customers who can't port. This matters because without data center revenues, client alone won't be enough to fund the next round of development.
I do agree with you on the point that we should be talking about performance per watt; that is the most important metric when comparing CPUs. AMD seems to have achieved something quite remarkable, but before seeing Geekbench or real-world comparisons with optimized applications, it is difficult to say much more.
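As a sketch of what that comparison would look like once someone measures package power during a run (the scores and wattages here are made up purely for illustration):

def perf_per_watt(score: float, avg_package_w: float) -> float:
    # Benchmark score divided by average package power over the run.
    return score / avg_package_w

chip_a = perf_per_watt(15_000, 30.0)   # hypothetical chip A: 500 pts/W
chip_b = perf_per_watt(12_000, 20.0)   # hypothetical chip B: 600 pts/W
print(chip_a, chip_b)                  # B "loses" the raw score but wins on efficiency

A headline score alone, with no wattage next to it, tells you nothing about efficiency.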

What you say about modern RISC vs. ancient CISC architectures is, of course, true. The only reason x86 is still alive is legacy. CISC architectures suffer from difficulties in pipelining, which makes them harder to implement in silicon. All highly efficient embedded architectures are RISC designs with relatively simple, orthogonal instruction sets.

However, that may not be the whole truth with CPUs. While x86-64 is a CISC instruction set, it is not as "CISCy" as the original 8/16-bit 8086 instruction set. Instead, many of the most common instructions resemble RISC instructions, without significant decoding overhead. Real-world performance bottlenecks are often tied to very specific computation tasks (matrix multiplications, neural network operations, video decoding/encoding) which should, in any case, be offloaded to dedicated accelerator hardware. The impact of generic CPU performance on total system performance may not be that big, as the heavy lifting is not done with native CPU instructions.
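A software-side illustration of that offloading principle (using NumPy's BLAS backend as a stand-in for the accelerator; a real system would hand this to AMX/GPU/NPU blocks, but the effect is the same kind of thing):

import time
import numpy as np

n = 128
a, b = np.random.rand(n, n), np.random.rand(n, n)

# Naive interpreted triple loop: every multiply-add takes the generic path.
t0 = time.perf_counter()
c_naive = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
           for i in range(n)]
t_naive = time.perf_counter() - t0

# Same math dispatched to an optimized BLAS kernel (SIMD, cache blocking).
t0 = time.perf_counter()
c_fast = a @ b
t_fast = time.perf_counter() - t0

print(f"naive: {t_naive:.2f}s  offloaded: {t_fast:.5f}s")  # orders of magnitude apart

The generic instruction stream barely matters once the hot loop lives in a tuned kernel, which is the point about accelerators.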

There are also different levels of runtime translation of the instruction stream into something lower-level (micro-operations). The unavoidable overhead of x64 relative to RISC comes from this translation step, which is performed in several phases. The silicon area required for it may not be large enough to make CISC so much less efficient that it would render x64 obsolete. I remember a study published maybe five years ago stating that decoding consumes 10% or less of total power. Some of that decoding also happens in modern ARM processors, so the architectural power-efficiency difference may be relatively small after all.
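Taking that study's figure at face value, a back-of-the-envelope bound (the ARM-side fraction is my assumption, purely for illustration):

package_w       = 20.0   # hypothetical package power under load (W)
x86_decode_frac = 0.10   # upper bound from the study mentioned above
arm_decode_frac = 0.03   # assumed; ARM decodes too, just more cheaply

gap_w = package_w * (x86_decode_frac - arm_decode_frac)
print(f"decode-related gap <= {gap_w:.1f} W of {package_w:.0f} W")   # <= 1.4 W, ~7%

If decode really is that small a slice, the ISA by itself can't explain multi-watt efficiency gaps; the node, the design, and the accelerators have to do the explaining.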

The game is becoming quite interesting. It seems that Intel's good ol' "pump more power into it" may not be the winning strategy after all. Go, AMD, go! Go, Apple, go! I'm getting my popcorn and cola.
 
Being old enough to remember the G4 fiasco... I have two thoughts here.
1) AMD is simply playing catch-up, which is great. Q1 2023 to finally make a comparison to a Q4 2020 chip. Better late than never.
2) AMD leapfrogs Apple's CURRENT attempts, and we watch for 18 months as the M2/M3 becomes the new G4, stuck in place.
Apple has been very successful in producing new generations of CPUs and SoCs for its portable devices. It would be surprising if it managed to fail Intel-style. (Intel was the number one in semiconductor manufacturing and processor design. Then they decided to forget about the manufacturing. Tick-tock became tick... umm... tock? ...tock... or was it tick?)

The situation with G4 was completely different.
 
Linux, according to the Stack Overflow survey (actually, I'm not 100% sure), has just surpassed macOS as the more favoured platform to develop software on. macOS is really mostly for dedicated use cases or general light use. Safari is terrible. The entire OS looks more and more like iPadOS. I need to restart several times just to get my iPad syncing with my Mac again. And I don't even want to mention how bad macOS has been in terms of stability compared to what I knew before: 3 system-wide freezes in the past 2 months. Doesn't sound like a lot? The Windows 10 machine sitting right next to it has experienced no system-wide freeze or BSOD for several months (I think at least the past 5 months).

Gaming? Had they not effectively shut down the iOS-apps-on-macOS feature back in the Big Sur days, gaming on macOS would've been great. Now, it's just as bad.

I hope AMD and Intel can both produce amazing mobile chips with great battery life. My laptop is due for an upgrade, and I want something a bit more powerful.

I'm not saying the M1 MacBook doesn't have its own advantages. The chip is so efficient that I can have 3 external drives plugged in, two of which are HDDs, while I attempt a full backup on battery power alone. And it works! From 80% to 25%. Had this happened on a Windows PC, the battery would probably be dead before all the data was backed up.

We will see in the coming years who will take the lead.
You must have the only MacBook in the world that freezes more often than a Windows laptop 🤣
 
Being 30% faster at a cherry-picked task, without normalizing for power, is meaningless. It's similar to how Intel likes to boast about amazing peak performance that can be sustained for a millisecond. Notice that despite all these claims about superior efficiency from both x86 vendors, we don't see many fanless x86 thin-and-light notebooks that can edit video as well as an MBA. We also don't see x86 equivalents to the M1 Ultra that fit into a Mac Studio-sized case with a similar acoustic profile.
And you are cherry-picking as well.

The defense of Apple/M-series is always "while on battery" or "quieter or no fan noise".

That is all great, but I have seen enough tests using real-world scenarios, like rendering video in products such as Premiere or Resolve, that show Intel desktop machines outperforming a fully loaded Studio.
 
Do you really believe this is a competition? Look on Google for the top 10 owners of "Apple", "Intel", "AMD", and "NVIDIA": the same smart guys simulating competition and feeding us technology piecemeal. Cheap theater, just like the world we live in.
 
Windows on Arm is still an absolute sh*t show. Don't expect a quick transition like Apple's. Such a shame, too, because these new chips sound promising.
 
Yep, the whole macOS platform is kind of useless except for video editing (encoding), iOS/macOS development, and daily consumer tasks. 3D software on macOS is very unstable: Maya crashes all the time, Blender is not optimized, and 3ds Max doesn't even exist. No decent CAD apps like SolidWorks, Pro/E, CATIA, etc. exist.
Audio-wise it's also a disappointment: FL Studio and Ableton run worse than on Windows, and many nice VST plugins either don't exist for Apple Silicon or run badly.

I hope this AMD news is true, because the only real advantage Apple Silicon has is its performance-to-power-consumption (and hence battery runtime) ratio.

And I don't even want to get into gaming or other graphics features (shaders), where macOS and Apple Silicon just suck.
People like you have been living in hope and deluding themselves for 2 years 🤣
 
Windows on Arm is still an absolute sh*t show. Don't expect a quick transition like Apple's. Such a shame, too, because these new chips sound promising.
These new chips are still x86, which is dying slowly, and by the time these chips come out it will probably be dead 🤣
 
AMD: "HEY YOU GUYZZZZ!!! We caught up with last year's mid-range chip just in time for Apple to release their next iteration, negating our marketing! Our next chip on the roadmap is two years away, but Apple will release 2 desktop and 2 portable chips for various products in that time frame."
 
In many ways the M-series/AMD comparison is pointless: you'll never see an Apple chip in a Windows PC or an AMD chip in a Mac, so these aren't pound-for-pound speed tests.
 
As long as it plays nicely with the rest of the system… I use Unity daily, myself on an M-series Mac (M1 Ultra as of December), while colleagues on our small team are on Windows.

They had the latest Intel something with the latest RTX something-else accessible to them at the time (3080s, some of them, I believe).

It is true that the game runs faster in-editor, FPS-wise (200+ vs ~180)… however, opening the project takes like 5 minutes (vs barely 30 seconds on the M1), hitting play and stop brings up a progress bar that can sit there for a minute (on the Mac it's a beach ball for a few seconds), and so on. And that's with annoying macOS performance bugs still to be fixed by Unity (which, to their credit, they have been tackling as they come).

I don't doubt that the raw performance is there, as shown by the numbers, but in my experience the round trip as a whole always feels slower in general… In Blender, we don't just hit render over and over again: the mesh has to go in and out of edit mode, we switch between texturing/shading/etc. tabs, switch viewport render modes, animate, run physics sims and playback, etc.
I'm no Blender expert, but for the simple things I've had to do, the difference when I had the chance to try a 2020 iMac's 5700 XT against an M1 Pro 14" was night and day, never mind the M1 Ultra.

Still, I hold AMD in high regard CPU-wise (not their GPU drivers, for that matter… I lost a lot of my youth there).
 
In many ways the M-series/AMD comparison is pointless: you'll never see an Apple chip in a Windows PC or an AMD chip in a Mac, so these aren't pound-for-pound speed tests.
Some tasks are directly comparable, though. For example, I remember being impressed with my M1 MBA clocking 9 hours of Zoom on a single full charge while remaining cool to the touch. Something no Windows laptop could match back in 2020, when the M1 chip was first unveiled.
 
Looks like AMD is pulling an Intel and picking targeted benchmarks in niche areas where Apple is weak.

The AMD chips have hardware ray-tracing support for Blender, which the M1 chips lack. Apple chips are rumored to get this in the future, but it has reportedly been delayed because Apple is still working on the feature's power efficiency.

Fixed-pipeline stuff like video-playback power efficiency comes down to the process node, where they are one step newer than the older M1.

The older M1 Pro still blows the AMD chip out of the water for traditional CPU tasks with its insane amount of out-of-order parallelism.

GPU tasks will be a mixed bag of strengths and weaknesses, since Apple optimizes their GPU very differently (a mobile GPU on steroids) than PC vendors do, which makes a fair comparison hard. Apple's GPU choices could one day help them win the mobile (Switch-like) gaming war (if they ever choose to compete) and pay off in AR headsets/glasses, which would benefit from Apple's GPU design.
 
I saw the AMD CEO at an Asian grocery store here in Austin; I took a quick pic of her standing there while her husband bagged the groceries.
 