My observation of this thread as a whole is that everyone is fixated on the nanometers, because lower is better... but I'm also noticing that the same people aren't engineers.

What is missing from this discussion is that the "process" is about the wavelength of the laser used to expose the wafers.

Sure, they can fit more into a chip with a shorter wavelength, but is it always beneficial to make the circuitry as small as possible? I don't think so. I know there are issues that pop up when traces get too small, current leakage for one.
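For reference, the classic lithography rule of thumb (the Rayleigh criterion) ties the smallest printable feature to the exposure wavelength:

CD = k₁ · λ / NA

where CD is the minimum feature size, λ is the exposure wavelength, NA is the numerical aperture of the projection optics, and k₁ is a process-dependent factor. That relation is why fabs moved from 193 nm DUV to 13.5 nm EUV light sources. Worth noting, though, that modern node names like "7 nm" are marketing labels rather than a literal measurement of anything on the die.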

And with AMD, just because they advertise that they are using a higher-resolution process, does that mean they are actually making designs that push its limits? My guess is no... a 1080p TV can play 720p content just fine.

I would like to hear an engineer's take, but I anticipate there's a complex set of factors behind the choices being made in these chips. I bet the nanometers aren't the biggest factor.
Feature size is not the only way to make a better process.
 
A sprinter is a very different runner from a marathon runner!

The A-series chip is still only a sprinter; it just can't run the marathon that Intel or AMD chips can.

I fail to see how this dichotomy is in any way analogous to the discrepancy between Apple's A-series chip performance and Intel's. Apple has yet to release a chip in the same thermal and power consumption class as Intel's processors, which is precisely what makes the A-series' performance so impressive. There is no reason why Apple's success in chip design can't scale, as long as they dedicate the proper resources.
 
why our main desktop/laptop machines are still using Intel CPUs.
Because of history/compatibility. Intel tried making a clean break, building 64-bit hardware with no backwards compatibility (Itanium), annnnnnnd... nothing. It's not because Intel CPUs have some kind of "magic" that makes them the only processors fit for desktops/laptops. It's because of all the code that's already been written: if you make hardware that can't run the wide variety of existing x86-compiled code, you're not selling your hardware.

Any ARM macOS system will have a day-one library of software created by every macOS developer that wants to sell on the new platform AND that uses Xcode. And we're not talking emulation, like Microsoft is doing; we're talking native code.
 
You're validating my point: Intel CPUs work when they're in the "proper form factors"; you can't compare the two and say that ARM sucks because it can't sustain performance. If Apple designed their A-series CPUs for laptop form factors, they'd have much better sustained performance. Right now they're not; they're designed for short bursts, to save battery life in a form factor with no active cooling, so you can't compare the two. Intel's dual-core Y series could be compared against Apple's A12X, but I don't recall seeing anything like that, and the Y series kinda sucks.

The iMac has the same issues; they're just less noticeable because of the larger form factor. They've improved the cooling in the iMac Pro.

Sorry, you made a jump that doesn't hold here. While on the surface you have a point, a passively cooled chip is a lot different from an actively cooled chip. But it's more than just that! At the core of it is the difference between CISC (Intel) and RISC (ARM) chip design, as well as the weight of the OS. iOS is a lightweight OS, unlike macOS. When you look at both of these factors, you see that a RISC design, even if fully developed to be the equal of CISC, won't really offer much difference in performance.

I use and like both Macs and iPads! But I don't expect to get an A-series APU in a Mac, as I really think it would be silly! The cost of re-buying apps coded for iOS, and of using any thunking service, is likewise foolish.

I also don't think I'll ever see a briefcase flying car like George Jetson had, either. :cool: As much as I'd love one.
 
I'm so glad I don't really know much about this stuff. My computer works for what I use it for. That's all that really matters to me.
 
Sorry, you made a jump that doesn't hold here. While on the surface you have a point, a passively cooled chip is a lot different from an actively cooled chip. But it's more than just that! At the core of it is the difference between CISC (Intel) and RISC (ARM) chip design, as well as the weight of the OS. iOS is a lightweight OS, unlike macOS. When you look at both of these factors, you see that a RISC design, even if fully developed to be the equal of CISC, won't really offer much difference in performance.

I use and like both Macs and iPads! But I don't expect to get an A-series APU in a Mac, as I really think it would be silly! The cost of re-buying apps coded for iOS, and of using any thunking service, is likewise foolish.

I also don't think I'll ever see a briefcase flying car like George Jetson had, either. :cool: As much as I'd love one.

The CISC vs. RISC fight ended more than two decades ago with the release of the Pentium Pro, a RISC-style micro-op CPU behind an x86 CISC decoder.

Modern Intel chips are neither CISC nor RISC, but rather a RISC-like micro-op backend behind a CISC decoder.
CISC has no logical advantage compared to RISC, let alone a performance advantage.
iOS runs the same kernel as macOS.
ARM chips stopped being strictly RISC a long time ago.

CISC vs. RISC hasn't been important for more than two decades. "Complex" in CISC does not mean doing calculus in one instruction; it just means some operations can be fused into one instruction for convenience. FMA, for example, does a + b*c in one instruction, and ARM already supports that kind of instruction.
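To make that concrete, here's a minimal C sketch (just an illustration of the idea, nothing vendor-specific): the C99 fma() function compiles to a single fused multiply-add instruction when the target supports one, on ARM and x86 alike.

```c
#include <math.h>   /* C99 fma() */
#include <stdio.h>

int main(void)
{
    double a = 2.0, b = 3.0, c = 4.0;

    /* fma(b, c, a) computes b*c + a with a single rounding step.
       On ARMv8 this typically becomes one fmadd instruction;
       on x86 with FMA3, one vfmadd instruction. */
    double r = fma(b, c, a);

    printf("a + b*c = %f\n", r);  /* prints 14.000000 */
    return 0;
}
```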

iOS is nowhere near lightweight compared to macOS. It runs everything macOS could run. After the 64-bit transition, all iOS apps are source-code compatible with macOS and vice versa.



Extra knowledge: CISC was well known for saving memory; the same routine assembles to much shorter code on CISC than the RISC version, which reduced instruction memory usage, and that was a big deal in the 1980s.

You're probably thinking fewer instructions run faster, but that's not the case here: one RISC instruction executes much faster than one complex CISC instruction, so the performance ends up about the same.
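A toy example of both points, density and speed (the assembly in the comments is representative hand-written output, not captured from any real compiler):

```c
/* CISC vs. RISC code density on a simple read-modify-write. */
void add_to_counter(long *counter, long n)
{
    /* x86-64 (CISC), one instruction with a memory operand:
           add qword ptr [rdi], rsi
       AArch64 (load/store RISC), three instructions:
           ldr x2, [x0]
           add x2, x2, x1
           str x2, [x0]
       Fewer CISC instructions, but each one does more work, so the
       execution time ends up roughly the same either way. */
    *counter += n;
}
```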

Today our memory is filled with pictures and media instead of CPU instructions.
 
Praying for the opposite. Anyone who’s held a lightweight 2018 iPad Pro in their hand as it coolly, fanlessly renders out 4K video files in seconds knows what’s coming with ARM. Curious to see how they handle software but can’t wait.
When ARM chips are eventually able to do everything Intel chips can do, you will end up with an ARM chip that is essentially indistinguishable from an Intel chip. They're just coming at it from different angles. It's the same situation with macOS and iOS.
 
Praying for the opposite. Anyone who’s held a lightweight 2018 iPad Pro in their hand as it coolly, fanlessly renders out 4K video files in seconds knows what’s coming with ARM. Curious to see how they handle software but can’t wait.
My 12.9-inch iPad Pro bent with normal use, so... not happy about that thinness and lightness trade-off.
 
They were comparing the 32-core N1 to a 32-core Zen 1 EPYC, so obviously it's not faster than the current 64-core Zen 2 EPYC (half the core count, lower per-core performance).

But that 32-core CPU is only 105 W.

It just stands as proof that ARM server chips can outperform Xeon/EPYC on both single-core and multi-core performance.
ARM is way weaker in SIMD.

Even AMD is still weaker than Intel here.
 
Apple was foolish not to have a plan B just in case.
What we saw WAS the plan B. :) Plan A was shipping what they planned to ship; plan B would have been, rather than ship nothing, to ship what they did.

It wasn't an Apple-only thing, either; everyone across the board was impacted by the same issues in various ways. Because no one's going to pay to produce mass quantities of two different cases, then use one and recycle the other. That's inventory hell!
 
ARM is way weaker in SIMD.

Even AMD is still weaker than Intel here.

If you're comparing core-to-core SIMD performance, then yes, Intel with AVX-512 is faster.

But if you're comparing watt-to-watt SIMD performance, Intel is nowhere near faster, as their CPUs are twice as power-hungry as AMD's EPYC. Their 28-core Xeon Platinum is way slower than a 64-core EPYC that runs on less power.

ARM goes the same way. It loses per core to AVX-512 but can pack in many more cores to counter that.

SIMD usage is almost always heavily multithreaded, so single-core performance isn't useful here.

BTW: Intel's AVX-512 has a heavy CPU frequency penalty. The cores throttle down to 1.x GHz while running a full load of AVX-512 SIMD instructions. AVX-512 is 10x faster on paper only.
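For anyone curious what that kind of code looks like, here's a minimal AVX-512 sketch (assumes a CPU with AVX-512F, a compiler flag like -mavx512f, and n being a multiple of 16; purely illustrative). One fused multiply-add instruction handles 16 single-precision elements at once, which is where both the on-paper speedup and the heat come from, and why this work is usually spread across many threads.

```c
#include <immintrin.h>  /* AVX-512 intrinsics */
#include <stddef.h>

/* y[i] = a * x[i] + y[i] over n floats, 16 lanes per iteration.
   Assumes n is a multiple of 16 to keep the sketch short. */
void saxpy_avx512(float a, const float *x, float *y, size_t n)
{
    __m512 va = _mm512_set1_ps(a);           /* broadcast a into 16 lanes */
    for (size_t i = 0; i < n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);  /* load 16 floats */
        __m512 vy = _mm512_loadu_ps(y + i);
        vy = _mm512_fmadd_ps(va, vx, vy);    /* 16 multiply-adds, one instruction */
        _mm512_storeu_ps(y + i, vy);         /* store 16 results */
    }
}
```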
 
If you're comparing core-to-core SIMD performance, then yes, Intel with AVX-512 is faster.

But if you're comparing watt-to-watt SIMD performance, Intel is nowhere near faster, as their CPUs are twice as power-hungry as AMD's EPYC. Their 28-core Xeon Platinum is way slower than a 64-core EPYC that runs on less power.

ARM goes the same way. It loses per core to AVX-512 but can pack in many more cores to counter that.

SIMD is heavily multithreaded, so single-core performance isn't useful here.
SIMD is parallel, not multithreaded; the parallelism happens within a single thread.
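To spell out the distinction with a minimal sketch (plain C, no particular compiler assumed):

```c
#include <stddef.h>

/* One thread, one instruction stream. If the compiler vectorizes
   this loop, each SIMD instruction still processes 4/8/16 elements
   at once -- that's data parallelism inside a single thread.
   Splitting the i range across threads would be multithreading,
   a separate and orthogonal kind of parallelism. */
void scale(float *v, float s, size_t n)
{
    for (size_t i = 0; i < n; i++)
        v[i] *= s;
}
```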
 