I tend to think that Apple's silicon team has a perf/watt curve they are aiming for with each generation of compute cores. They are likely competing against themselves rather than AMD, Intel, Qualcomm, Samsung, or whatever other company is out there.

I wouldn't be surprised in the least if their next Mx SoCs' GPU performance doesn't beat AMD's or Nvidia's top-end GPUs, but it will probably have hit their internal performance target for the intended Macs. I don't think they would want to brag about their SoCs after what they learned from their PPC -> Intel adventure.
 
But, by the way, none of this is because the CPU manufacturers hit a wall. This was ALWAYS the trade-off. The issue was that, back before, say, the mid-1990s, the number of transistors that could be squeezed onto a die was too low to enable much in the way of multi-core designs, so the only choice we had back then was higher frequency! So you sort of have it backwards.

Sort of fair, but also not - there was almost no use for multiple cores in the computing tasks of that era, nor was anyone prepared to design apps/compilers to make use of them to any real degree. Hell, they barely do now, 20-25 years later.
 
there was almost no use for multiple cores in the computing tasks of that era, nor was anyone prepared to design apps/compilers to make use of them to any real degree.
Well, back in those days (late '90s, early 2000s), multi-socket computers were the only way (that I remember) to get more performance, usually for database servers. You had to pay an arm and a leg, and maybe a kidney, for it.
 
Sort of fair, but also not - there was almost no use for multiple cores in the computing tasks of that era, nor was anyone prepared to design apps/compilers to make use of them to any real degree. Hell, they barely do now, 20-25 years later.

Chicken/egg. There was little software that supported multiple cores because there were few multi-core processors. Pretty much all you had were some multi-socket setups for specialized uses (because they were so expensive, and not ideal for all workloads due to penalties in process switching). The same problems that benefit from multiple cores today existed back then. I had one of the first multi-core Opterons sitting on my desk, and I used it to design the next multi-core Opteron. Certainly we CPU designers knew, from personal experience, that multiple cores would be great for certain problems. Not to mention that even if you are an average user doing average things, having 2 cores is a big deal because it keeps the machine responsive - that second core is always available to power the UI and accept your inputs. We did have to work with the OS vendors (and did our own work with Linux) to add support. But, again, chicken/egg.

In any event, we were thinking about multiple cores for at least 5 years before we decided to do it, and the holdup was always transistor budget - when 1 core takes 85% of the available die area, you aren't in a position to add a second. You also have the yield problem - roughly double the likelihood of a fatal defect. Of course you can sell the failed dual-core chips as single-core chips, but your cost to make them was higher, so the profit is lower. So you also had to be in a position where your ASP was high enough to justify all that, and you needed to raise the price of the multi-cores to compensate. As a result, in the early days, we knew we were really targeting only scientific/engineering payloads, where cost is no object (we were used to paying $15,000+ for a "real" computer).
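The yield penalty described above can be sketched with the classic Poisson defect model (yield ≈ e^(-defect density × die area)). The defect density and core area below are made-up numbers purely for illustration, not figures for any real process:

```python
# Poisson yield model: yield = exp(-defect_density * die_area).
# All numbers here are hypothetical, chosen just to show the trend.
import math

defect_density = 0.2   # fatal defects per cm^2 (assumed)
core_area = 1.0        # cm^2 per core (assumed)

# Doubling the die area for a second core doubles the exponent,
# so the fraction of good dies drops noticeably.
single_core_yield = math.exp(-defect_density * core_area)
dual_core_yield = math.exp(-defect_density * 2 * core_area)

print(f"single-core yield: {single_core_yield:.1%}")  # ~81.9%
print(f"dual-core yield:   {dual_core_yield:.1%}")    # ~67.0%
```

With these assumed numbers, roughly one in three dual-core dies fails versus one in five single-core dies, which is exactly the cost pressure the post describes.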
 
Interesting times for sure.

As Apple leaves the x86 world, showing there actually is a contender - for the first time in more than a decade - we could be witnessing some sort of new platform war, beginning just now.

My guess is that - following Apple's example - some manufacturers are in the process of coming up with Arm chips to challenge the x86 world. Maybe Samsung, Microsoft, Qualcomm, or Nvidia (the Jetson AGX SoCs are close).

We'll see what Intel/AMD have up their sleeves to counter the Arm onslaught.

Nothing is decided yet, of course. I'm inclined to believe Arm/Apple will come out on top, though.
 
My main point is that the principal benefit of the M1 is that everything is on the same bus/chip, so that benefit has largely already been "gotten". Adding more cores doesn't benefit very many workflow types or apps. And I haven't heard much about the ability to scale frequency on these; with so many components on the same chip, that may be limited.

You don't need to scale frequencies if you can scale core counts. People seem to miss that the scalability of Apple Silicon isn't only about how many cores it can use, but about how wide the decoder pipeline can be. Unlike x86, with its practical limit of about 4 decoders (due to the complexity of decoding variable-length instructions), the ARM architecture uses fixed-length instructions. This means the decoders know exactly where each instruction starts. That gives the M-series architecture a major leg up on x86: while increasing core count on its own may not have a significant impact, the ability to scale up both core counts and decoder width means the SoC can sustain significantly higher IPC, and that's where the true performance comes from, not the frequency of the chip.
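The decoder-width point can be made concrete with a toy sketch. The byte stream and the "length in the first byte" encoding below are invented for illustration - they are not real ARM or x86 encodings - but they show why fixed-length instruction boundaries can be handed to many decoders at once, while variable-length boundaries must be discovered serially:

```python
# Toy illustration of why fixed-length ISAs make wide decode easy.

def fixed_length_boundaries(stream, width=4):
    # Every decoder knows its start offset up front: instruction i
    # begins at byte i*width, so all offsets are known in parallel.
    return list(range(0, len(stream), width))

def variable_length_boundaries(stream):
    # Each boundary depends on decoding the previous instruction's
    # length first, so finding boundaries is inherently serial.
    offsets, i = [], 0
    while i < len(stream):
        offsets.append(i)
        i += stream[i]  # hypothetical: first byte gives the length
    return offsets

fixed = bytes(16)                                 # four 4-byte instructions
variable = bytes([2, 0, 3, 0, 0, 1, 4, 0, 0, 0])  # lengths 2, 3, 1, 4
print(fixed_length_boundaries(fixed))        # [0, 4, 8, 12]
print(variable_length_boundaries(variable))  # [0, 2, 5, 6]
```

In hardware, x86 decoders work around this with speculative decode at every byte offset and predecode caches, which is exactly the complexity the post alludes to.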
 
My prediction M1 will end up like PowerPC and a u-turn will be made to AMD or Intel 12th gen. Check back in 2022.

So you're saying that Apple is going to abandon its own silicon within the same timeframe they said it would be completed in? Bad take there, especially given that lack of input and control over PPC development is what led Apple to switch to Intel back in the day and then switch to Apple Silicon last year. When you control the entire stack, you give yourself much more freedom to innovate both on the software and hardware sides of the equation. The M1 is just the first version of Apple Silicon, and future iterations promise to be even more impressive.
 
My prediction M1 will end up like PowerPC and a u-turn will be made to AMD or Intel 12th gen. Check back in 2022.
This makes no sense. The wattage of whatever AMD or Intel ship as a 12th gen will be insane (I still remember the jokes about the Pentium space heater when it first came out), and odds are they will be expensive. And this is ignoring any production issues that may rear their ugly head.

Never mind that the future, like it or not, belongs to ARM, as that is what dominates the mobile space, and it is only a matter of time before it comes to non-Nintendo consoles and eventually the PC. The only things really holding ARM back were emulators and RAM. Apple has shown that translation rather than emulation is the way to go, and RAM is insanely cheap.
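The translation-versus-emulation distinction can be sketched with a toy "ISA" of (op, operand) tuples. This is an invented miniature, not how Rosetta 2 actually works internally, but it shows the key idea: an emulator pays instruction-dispatch cost on every run, while a translator pays it once up front and then runs the resolved code directly:

```python
# Toy contrast between emulation and ahead-of-time translation.

def emulate(program, acc=0):
    # Emulator: dispatch on every instruction, on every run.
    for op, n in program:
        if op == "add":
            acc += n
        elif op == "mul":
            acc *= n
    return acc

def translate(program):
    # Translator: resolve each opcode to a host operation once,
    # producing a function that runs without any dispatch.
    ops = {"add": lambda a, n: a + n, "mul": lambda a, n: a * n}
    steps = [(ops[op], n) for op, n in program]  # done ahead of time
    def run(acc=0):
        for fn, n in steps:
            acc = fn(acc, n)
        return acc
    return run

prog = [("add", 3), ("mul", 4), ("add", 2)]
native = translate(prog)
print(emulate(prog), native())  # both print 14
```

Rosetta 2 takes this further by translating whole x86-64 binaries to arm64 at install time, which is why translated apps run close to native speed instead of at interpreter speed.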
 
This makes no sense. The wattage of whatever AMD or Intel ship as a 12th gen will be insane (I still remember the jokes about the Pentium space heater when it first came out), and odds are they will be expensive. And this is ignoring any production issues that may rear their ugly head.

You make no sense. Are you conveniently forgetting that the MacBook Pro M1 has a fan that runs constantly? The fan on the AMD mobile Ryzen 4650U doesn't spin unless it's under full load, and that's only at 7nm; it might not even need a fan once it goes to 5nm. Intel is also a non-issue once they decide to outsource to TSMC/Samsung 5nm/7nm. The M1 will be relegated to competing with $300 ARM Chromebooks.
 
You make no sense. Are you conveniently forgetting that the MacBook Pro M1 has a fan that runs constantly? The fan on the AMD mobile Ryzen 4650U doesn't spin unless it's under full load, and that's only at 7nm; it might not even need a fan once it goes to 5nm. Intel is also a non-issue once they decide to outsource to TSMC/Samsung 5nm/7nm. The M1 will be relegated to competing with $300 ARM Chromebooks.
The MBA doesn't even have a fan, yet it barely throttles.

We will see; let's discuss this in two years' time. My guess is you'll be proven oh so wrong by then.
 
This makes no sense. The wattage of whatever AMD or Intel ship as a 12th gen will be insane (I still remember the jokes about the Pentium space heater when it first came out), and odds are they will be expensive. And this is ignoring any production issues that may rear their ugly head.

Never mind that the future, like it or not, belongs to ARM, as that is what dominates the mobile space, and it is only a matter of time before it comes to non-Nintendo consoles and eventually the PC. The only things really holding ARM back were emulators and RAM. Apple has shown that translation rather than emulation is the way to go, and RAM is insanely cheap.
A couple of things. You can't predict the TDP of 12th gen Intel chips. Based on today's trajectory one could assume high heat, but one can't cite that as fact. It's clear that ARM of some type dominates the mobile phone space. But in a few years we'll see where Intel and AMD are.

RAM is NOT insanely cheap. Cheap RAM might be insanely cheap, but good RAM isn't.
 
The MBA doesn't even have a fan, yet it barely throttles.

We will see; let's discuss this in two years' time. My guess is you'll be proven oh so wrong by then.

Sure, that's why people have to mod their MBA M1 with a thermal pad so it doesn't throttle, and even then it performs worse than the MBP M1 with a fan.

 
You make no sense. Are you conveniently forgetting that the MacBook Pro M1 has a fan that runs constantly?

As opposed to laptops that turn their fans down to zero? That’s rare.

The fan on the AMD mobile Ryzen 4650U doesn't spin unless it's under full load, and that's only at 7nm

The 4650U not only puts out more heat than the M1, but it's also just over half as fast. Not sure why you'd bring it up. A Tiger Lake or even just an Ice Lake chip would be a far better comparison.
 
Sure, that's why people have to mod their MBA M1 with a thermal pad so it doesn't throttle, and even then it performs worse than the MBP M1 with a fan.

Right. Throttling decreases the MBA's performance very little, about 10-15%, and only when pushed to the max. For the vast majority of users it doesn't throttle at all.
Compare that to Intel machines. Those are worse than the M1 by orders of magnitude. It's almost comical that you try to compare Intel machines with M1s; Intel is so far behind these days that one cannot really make the comparison with a straight face.
 
Sure, that's why people have to mod their MBA M1 with a thermal pad so it doesn't throttle, and even then it performs worse than the MBP M1 with a fan.


Yes, of course there’s thermal throttling, and of course there’s less thermal throttling with active cooling. But the CPU is faster than what either AMD or Intel have to offer, so who cares?

Nobody “has to” mod their MBA.
 
The 4650U not only puts out more heat than the M1, but it's also just over half as fast. Not sure why you'd bring it up. A Tiger Lake or even just an Ice Lake chip would be a far better comparison.

I have both here. The M1 delivers almost half the performance, 24,343 MIPS (while giving up compatibility) vs 41,023 MIPS, and that's against the older Ryzen 4650U, not the latest Ryzen 5600U.

[Screenshots: 7-Zip benchmark results for the Ryzen 4650U and the MBA M1]
 
You make no sense. Are you conveniently forgetting that the MacBook Pro M1 has a fan that runs constantly? The fan on the AMD mobile Ryzen 4650U doesn't spin unless it's under full load, and that's only at 7nm; it might not even need a fan once it goes to 5nm. Intel is also a non-issue once they decide to outsource to TSMC/Samsung 5nm/7nm. The M1 will be relegated to competing with $300 ARM Chromebooks.

My MacBook Pro with the M1 has a fan that either never spins at all or just doesn't spin loudly enough to hear. I've never heard anyone claim the M1 Pro's fan runs constantly. You seem to be spreading unfounded claims and outright untruths about the M1 compared to the Ryzen 4650U (whose fan does run constantly, and louder than the M1's).
 
I have both here. The M1 delivers almost half the performance, 24,343 MIPS (while giving up compatibility) vs 41,023 MIPS, and that's against the older Ryzen 4650U, not the latest Ryzen 5600U.

[Attachments: 7-Zip benchmark screenshots]
Ah yes, I should've made a "unless you're running one of the CPUs in emulation and then pick one specific microbenchmark" disclaimer. Silly me.

The 4650U does have six full-performance cores, so it does have an edge in the few applications that you can parallelize like that, yes.
 
Ah yes, I should've made a "unless you're running one of the CPUs in emulation and then pick one specific microbenchmark" disclaimer. Silly me.

The 4650U does have six full-performance cores, so it does have an edge in the few applications that you can parallelize like that, yes.

That's just the reality of the M1. You give up compatibility with the M1, and without native apps performance is even slower, at nearly half of AMD's. That's why my MBA M1 is used pretty much as an overpriced Chromebook.
 
That's just the reality of the M1. You give up compatibility with the M1, and without native apps performance is even slower, at nearly half of AMD's. That's why my MBA M1 is used pretty much as an overpriced Chromebook.
The “reality of M1” is that you have to run 7-Zip through Wine+QEMU when you could’ve run the command-line tools natively instead? Really?
 
The “reality of M1” is that you have to run 7-Zip through Wine+QEMU when you could’ve run the command-line tools natively instead? Really?

Yeah, once again, the forum could really use an eye-roll emoji for reactions. I mean, mi7chy, this "benchmark" isn't even good for trolling. It's so bad and so obvious that even Intel didn't do it!
 
Yeah, once again, the forum could really use an eye-roll emoji for reactions. I mean, mi7chy, this "benchmark" isn't even good for trolling. It's so bad and so obvious that even Intel didn't do it!

What’s funny is that when we are designing processors, we don’t use *any* of these benchmarks when making our design decisions. I always find it amusing how people obsess about these things.
 