The sad fact is that the M2 Pro chip is actually slightly faster in single-core speed than the M3 Pro chip

That's not true.



Granted, the M3 Pro's memory bus is slower, and it has fewer P-cores. But its individual cores are faster, so overall, single-core is still about 17% faster, and multi-core about 7%. That's not much, but it's not nothing.
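For illustration, here's how percentages like those fall out of raw scores. The numbers below are hypothetical Geekbench-style scores, not official figures:

```python
# Back-of-the-envelope comparison using HYPOTHETICAL Geekbench-style
# scores, showing how fewer-but-faster cores can still win overall.
m2_pro = {"single": 2650, "multi": 14000}  # assumed scores
m3_pro = {"single": 3100, "multi": 15000}  # assumed scores

single_gain = (m3_pro["single"] / m2_pro["single"] - 1) * 100
multi_gain = (m3_pro["multi"] / m2_pro["multi"] - 1) * 100

print(f"single-core: {single_gain:+.0f}%")  # +17%
print(f"multi-core:  {multi_gain:+.0f}%")   # +7%
```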

(Compute is slower, though, by less than the margin of error.)

 
Apple Super Fan Benchmark Comment Breakdown

Competitors' benchmark scores are lower than Apple's: Haha, so-and-so can't touch Apple, just look at the benchmark scores! Seriously, it is all about having a higher benchmark score!!!

Competitors' benchmark scores are higher than Apple's: Benchmark scores don't matter! Let's see how it performs in real-world use!!
 
It doesn't matter what is faster than an M chip or a Snapdragon chip, because the only OS that will run on an M chip is Apple's stuff. Snapdragon is going to run Android about as fast as anything, unless Google's chip gets its head out of its ass. This article makes silly comparisons.
 
80W to compete with M3?
M3 is like 20W when under heavy load IIRC :D
80W is estimated TDP of M3 Max
That is exactly the problem with Apple Silicon. The M3 is a laptop chip. It will seriously underperform on desktops, where power usage does not matter. It is very easy to find an ARM chip that blows away an M3.

Apple needs to make something that is desktop-specific to go into the Mac Pro and Mac Studio. More than a few creative users are moving to Windows to gain performance over the M3. A high-end Xeon processor paired with an Nvidia GPU completely blows away the M3. Yes, it uses way more power, but desktop PCs have full-time access to AC mains power.
 
Apple's chip advantage is dwindling with the new efforts by Qualcomm and AMD and Intel.
 
That is exactly the problem with Apple Silicon. The M3 is a laptop chip. It will seriously underperform on desktops, where power usage does not matter. It is very easy to find an ARM chip that blows away an M3.

Apple needs to make something that is desktop-specific to go into the Mac Pro and Mac Studio. More than a few creative users are moving to Windows to gain performance over the M3. A high-end Xeon processor paired with an Nvidia GPU completely blows away the M3. Yes, it uses way more power, but desktop PCs have full-time access to AC mains power.
Power usage does matter on desktops as well as laptops. Power = heat, heat = energy costs on cooling, and heat = wasted energy. Are you saying you'd rather run a 200W power supply 24/7 over a 100W power supply? Power usage most certainly matters, regardless of who you are or what hardware you're using.
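To put a rough number on the 200W-vs-100W question (assuming 24/7 operation and an assumed $0.15/kWh tariff; substitute your own rate):

```python
# Annual energy cost difference between a 200 W and a 100 W machine
# running 24/7. The $0.15/kWh rate is an assumed tariff.
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.15  # USD per kWh, assumption

def annual_cost(watts: float) -> float:
    """Yearly electricity cost for a constant load of `watts`."""
    return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

extra = annual_cost(200) - annual_cost(100)
print(f"extra cost per year: ${extra:.2f}")  # $131.40
```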
 
So a Mac with virtualization software running Windows on ARM is supposed to be better than ARM running Windows directly?
Dunno what you have been smoking, but please do share.
Also, Windows 11 on ARM has a translation feature for running x86/x64 software, just like the Mac has Rosetta (although the Windows translation is much slower).
It's not better; it's just that on Windows on ARM there are barely any apps. It's like buying a Surface RT all over again (and I have a Surface RT!)

If you virtualise on a Mac, you get the personal benefits of running Windows together with the swathe of Mac apps that run perfectly well on an M1 machine.

An M1 MBA refurb is £600. Unless you need games or a Windows-specific app for work (e.g. Autodesk Revit), you'd be better off buying the Mac in nearly every situation.
 
That is exactly the problem with Apple Silicon. The M3 is a laptop chip. It will seriously underperform on desktops, where power usage does not matter. It is very easy to find an ARM chip that blows away an M3.

The Snapdragon X Elite is a laptop chip as well (and at the low end, to boot; it can't compete with the M3 Max), so that discussion seems off-topic to me.


Apple's chip advantage is dwindling with the new efforts by Qualcomm and AMD and Intel.

I don't know about AMD, but the Intel Meteor Lake results I've seen are… quite underwhelming. Slower than some Raptor Lake mobile chips, even.

It's not better, it's just that on Windows on ARM there are barely any apps.

Ehhhh. It's a lot better than in the Windows RT days. VS runs on ARM, modern .NET does, most of Office does.
 
Apple's chip advantage is dwindling with the new efforts by Qualcomm and AMD and Intel.
That's expected. You can toil away at something for decades making little progress, because when you look around you assume the rut everyone is in is fundamental-- until someone breaks out and shows there's a different way to think (pun intended).

These are all phenomenal design companies; once they know what the new target is, I'd expect them to close the gap quickly. Intel is the interesting one to me-- they're in a no-win situation. x86 is their reason for being-- if they keep insisting x86 is the future, they're hamstrung against the more modern architectures, but if they admit the emperor has no clothes and pivot architectures, they enter the market as mortals again... I think that conflict is why Intel has never pulled this off before.
 
Power usage does matter on desktops as well as laptops. Power = heat, heat = energy costs on cooling, and heat = wasted energy. Are you saying you'd rather run a 200W power supply 24/7 over a 100W power supply? Power usage most certainly matters, regardless of who you are or what hardware you're using.
Yeah, performance really can only be measured per watt anymore. What keeps computers from being faster than they are? The ability to extract heat from the cores. It's what limits the number of cores per chip and the clock rate you can run those cores at and it's why we have all varieties of "boost" clock frequencies. If you want to go faster you have to figure out how to run cooler.
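A toy illustration of that performance-per-watt framing, with entirely made-up scores and power figures:

```python
# Performance per watt with MADE-UP numbers: the chip with the lower
# raw score can still be far more efficient.
chips = {
    "efficient laptop chip": {"score": 15000, "watts": 30},
    "hot desktop chip":      {"score": 24000, "watts": 125},
}

perf_per_watt = {name: c["score"] / c["watts"] for name, c in chips.items()}
for name, ppw in perf_per_watt.items():
    print(f"{name}: {ppw:.0f} points/W")
# efficient laptop chip: 500 points/W
# hot desktop chip: 192 points/W
```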
 
That's expected. You can toil away at something for decades making little progress, because when you look around you assume the rut everyone is in is fundamental-- until someone breaks out and shows there's a different way to think (pun intended).

These are all phenomenal design companies; once they know what the new target is, I'd expect them to close the gap quickly. Intel is the interesting one to me-- they're in a no-win situation. x86 is their reason for being-- if they keep insisting x86 is the future, they're hamstrung against the more modern architectures, but if they admit the emperor has no clothes and pivot architectures, they enter the market as mortals again... I think that conflict is why Intel has never pulled this off before.
I agree that Intel is the most interesting one for that very reason. I'm not familiar with the work AMD is doing these days (other than to know that Ryzen is outperforming Intel's top-of-the-line CPUs, kinda like back in the early 2000s during the G3 and G4 era). Is AMD doing any work on non-x86 architectures, or on high-performance + high-efficiency x86?
 
Question to the nerds in here:
How many apps and operating systems can actually use multiple cores?
A MacBook Air with an M1 processor can do everything 95% of people want to do.
Most of us don't need a 32-core M3 processor with 64 GB of RAM.
 
I agree that Intel is the most interesting one for that very reason. I’m not familiar with the work AMD is doing these days (other than to know that Ryzen is outperforming Intel’s top of the line CPUs, kinda like back in the early 2000’s during the G3 and G4 era). Is AMD doing any work on non-x86 architectures or on high-performance+high-efficiency x86?
I'm not sure I've seen anything officially announced, as Qualcomm and Nvidia have, but there are rumors that they are, and AMD has proven themselves to be more flexible thinkers than Intel, so I have to believe they're better positioned to pivot. They dragged Intel into 64-bit land; I expect they'll be faster to recognize this as an opportunity as well. And while Intel is still trying to figure out how to make a competitive GPU core, AMD is already adapting their expertise there into an effort to compete with Nvidia in that space.
 
Question to the nerds in here:
How many apps and operating systems can actually use multiple cores?
A MacBook Air with an M1 processor can do everything 95% of people want to do.
Most of us don't need a 32-core M3 processor with 64 GB of RAM.
A lot of stuff actually does. You don’t realize it, but the Finder and Windows Explorer both use multiple cores, as does just about every web browser out there these days. Basically, anywhere where you want to be able to do tasks without locking up the UI uses threads, and anytime you’re using threads, you benefit from having CPUs that can effectively parallelize tasks.
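The pattern described above, handing slow work to background threads so the main (UI) thread stays responsive, can be sketched generically (this is an illustrative Python sketch, not Finder or Explorer code):

```python
# Generic sketch: hand slow tasks to worker threads so the main
# ("UI") thread stays free instead of blocking on each one.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_task(n: int) -> int:
    time.sleep(0.05)  # stand-in for disk I/O or heavy computation
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(slow_task, n) for n in range(4)]
    # ...the main thread is free here to keep servicing events...
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9]
```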
 
Question to the nerds in here:
How many apps and operating systems can actually use multiple cores?
A MacBook Air with an M1 processor can do everything 95% of people want to do.
Most of us don't need a 32-core M3 processor with 64 GB of RAM.
Can, do or should?

Any app can, but only the ones that would genuinely benefit from more than 4 billion calculations a second really should, which is why few do.
 
A lot of stuff actually does. You don’t realize it, but the Finder and Windows Explorer both use multiple cores, as does just about every web browser out there these days. Basically, anywhere where you want to be able to do tasks without locking up the UI uses threads, and anytime you’re using threads, you benefit from having CPUs that can effectively parallelize tasks.
Ok, maybe that's a better answer... 😄

Still, most machines only keep a few cores loaded most of the time until the really heavy calculations kick in and bring in more computing power. Most apps you're using are spending most of their time waiting for you to click on something so they're idle. When you ingest a massive data file and process it, or run a game with a lot of simulation work happening, it'll demand more.

This is why I find Apple's approach so refreshing-- they give top single core performance that everyone benefits from to every device in a generation from iPhone to Mac Pro. Then they scale out from there for the more hardcore workloads.
 
This was NOT always the case:
The G3, G4, G5 and Intel chips were NOT designed by Apple! The interconnects or southbridges were designed or co-developed by Apple.

This means you're incorrect that Apple was always vertically integrated. Sorry, false.

Apple does NOT, I repeat, does NOT make ANY M-series chips! They design them and contract TSMC to manufacture them.
Apple does not make the iPhone, MacBook, Mac mini, or Mac Pro.
Apple does not make the iPhone display or battery.
Qualcomm doesn't make the Snapdragon processor.
NVIDIA doesn't make their own graphics processors either.
Apple designs their products and works with contract manufacturers to build them, using the manufacturing processes, materials and quality Apple requires.
Samsung and Intel are the only chip design companies that have their own fabs; AMD, Apple, Qualcomm and every other chip design company use a contract manufacturer like Samsung or TSMC to manufacture their chips/processors/SoCs.
 
I will NOT use Windows, I'm sorry, and I can't imagine Linux getting very far with ARM soon either... they don't even have hardware-accelerated browser video playback sorted out yet...



So, as usual the issue will be device drivers for whatever computer they put the new chip into.
 