Yep. Never been a useful computer that wasn’t x86 in the history of the universe. Plus Intel gives them away for free, so switching to ARM will increase prices.

In bizarro world.

Relax, Intel-based macOS isn't going anywhere. If Apple couldn't make this clear enough with the introduction of the Xeon-based supercomputer a few weeks ago, I don't know what could.

And unlike Microsoft, Apple already has an ARM-based general computing platform - it's called iPadOS. This is why ARM-based macOS isn't a thing, and will NEVER be a thing (all the wild rumor-mongering clickbait articles like this notwithstanding).
 
I think you missed the sarcasm in my post.
 
Are you aware of the reasons for the transition, specifically the performance-per-watt problems that dogged early-2000s PPC chips and their roadmaps?

I do agree that Intel's technology at the time was in many ways inferior, but making the switch resolved the above issues and probably helped a great many switchers dip a toe into the water of Mac OS by resolving long-standing issues with Windows compatibility.

I should begin by stating that I'm not any kind of expert on CPU design, so I could be totally wrong here. It's been reported that ARM chips are roughly 30% more power efficient than Intel's, and because of that, server companies are looking at them: power is their single biggest expense.

The claim is that the fundamental RISC architecture uses 30% less power than CISC, due to x86's basic design. If that's true, the basic x86 architecture would need a redesign to keep up with ARM, correct? What I think makes ARM attractive is how adaptable the architecture is: Apple designs what it wants, then adds it on top of the ARM foundation. ARM provides flexibility and much greater power efficiency, which translates to a significantly better TDP than Intel's current i-series chips. Its real potential is that it's so highly customizable. This is just what I have read, so I could be out in the weeds, but I kind of doubt it.
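To put that reported figure in perspective, here's a quick back-of-the-envelope calculation. The 30% is the unverified number from the post above, not a measurement:

```python
# Back-of-the-envelope: what a 30% power saving at equal performance
# means for performance per watt. The 0.30 figure is the reported
# (unverified) claim above, not a measured one.

def perf_per_watt_gain(power_saving):
    """Relative perf/watt improvement when power drops by `power_saving`
    at identical performance."""
    return 1.0 / (1.0 - power_saving) - 1.0

gain = perf_per_watt_gain(0.30)
print(f"~{gain:.0%} better performance per watt")  # ~43% better
```

So a 30% power cut at the same performance is roughly a 43% perf/watt improvement, which is why the number matters so much to server operators.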
What a resume. He was at AMD when they caught up to Intel and released their 64-bit chips which were pretty amazing back then. He was at Intel when they transitioned from Pentium to Core with multi-core computing. He was at ARM when mobile chips rocketed to the moon in terms of yearly performance increases. Looks like success follows this man. This is great news for Apple.

I'm pretty sure one of their first ARM Macs will be a MacBook and I'm thinking about picking it up as my next Mac (secondary to my iMac) because I think that could be interesting.

Or even as a first-edition collector's item. I can't find a downside to that purchase, and when it does get released, I'll buy one. Thanks!
 
In the end it’s been proven repeatedly that all else being equal, you can get identical performance from RISC and CISC and burn 20% less power in RISC, or you can get 20% more performance from RISC as compared to CISC at the same power budget. The structures are essentially identical, except for the addition of a much more complicated instruction decoder (with a micro-op sequencer, microcode ROMs, etc.) in CISC.
The above numbers may be correct. But a CPU does not consist only of a decoder. Current CPUs have billions of transistors, so the decoder gets lost in the total transistor count. Therefore the difference at the end is much smaller. There will be an advantage for ARM, but it probably won't be that huge.
ARM is not magic (as some think). The laws of physics also apply to ARM.

Already in 2005 the following was written on Ars Technica:
"The original Pentium spent about 30% of its transistors on hardware designed solely to decode the unwieldy x86 ISA. Those were transistors that competing RISC hardware like PowerPC and MIPS could spend on performance-enhancing cache and execution hardware. However, the amount of hardware that it took to decode the x86 ISA didn't grow that much over the ensuing years, while overall transistor budgets soared. On a modern processor, if x86 decode hardware takes up twice as many transistors as RISC decode hardware, then you're only talking about a difference of, say, 4% of the total die area vs. 2%. (I used to have the exact numbers for the amount of die area that x86 decode hardware uses on the Pentium 4, but I can't find them at the moment.) That's not a big enough difference to affect performance."
https://arstechnica.com/uncategorized/2005/11/5541-2/

And I say it again. The quote is from 2005. At that time CPUs had significantly fewer transistors than today and even then the difference was only small.
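The Ars argument can be turned into a quick sanity check. Assume the decode hardware budget stays roughly constant (about 30% of the original Pentium's ~3.1M transistors, per the quote) while total budgets grow. The figures below are rough, order-of-magnitude illustrations, not die-shot measurements:

```python
# Sketch of the Ars Technica argument: a roughly fixed decode-hardware
# budget shrinks as a fraction of the die while total transistor counts
# soar. Budgets are rough illustrative figures, not measurements.

DECODE_TRANSISTORS = 1_000_000  # assume decode cost stays ~constant

budgets = {
    "Pentium (1993)":   3_100_000,
    "Pentium 4 (2000)": 42_000_000,
    "Modern CPU":       2_000_000_000,
}

for name, total in budgets.items():
    share = DECODE_TRANSISTORS / total
    print(f"{name:18s} decode share = {share:.2%}")
```

The same ~1M decode transistors go from roughly a third of the die to a rounding error, which is exactly the point the article was making.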

ARM may have a small advantage. But it should be small rather than huge. In the end it should be negligible, especially for desktop and server CPUs. You can build good CPUs with ARM. And you can build good CPUs with x86_64. You can also build garbage with both architectures.

By the way, Apple's A-series SoCs are also affected by Meltdown (AMD is not affected by Meltdown). I hope Apple didn't sacrifice security for performance.
 
Thing is, why wouldn't Apple, at the very least, be considering an ARM transition? Intel has dropped the ball again and again on delivering 7nm chips, and it's now looking like they won't have anything suitable until 2021 at the earliest, based on their leaked roadmap.

Apple would be absolutely insane to not at least consider transitioning away from Intel.

Keep in mind that the poor thermal performance of the newer MacBooks is likely at least partially down to Apple developing them for lower-TDP Intel chips that never materialised.
Oh, they for sure are. Not IF, but WHEN.
 

I’ve designed RISC (PowerPC, MIPS, SPARC). I’ve designed x86-64 (Opteron, Athlon 64, and even the K6). I understand the difference. The point I was making is that x86 has no magical advantage over ARM due to its ability to handle “complex” instructions - the person I was responding to made a bizarre claim that CISC is somehow better than RISC because it can handle “more complex” instructions.

As for your point, it’s not quite right. There’s more to it than just the number of transistors dedicated to decoding. There is logic all over the place to cope with x86 weirdness - it permeates the load/store unit, the caches, the page tables, the translation lookaside buffer, the register file size, the odd way it handles backward compatibility, and so on. The fact that you can have macro-instructions that directly read from or write to memory causes IPC problems. There’s a lot more to it than just needing microcode ROMs, a microcode sequencer, and a much more complicated instruction pointer.
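The memory-operand point can be sketched with a toy decoder: a single CISC-style read-modify-write instruction expands into the separate load, compute, and store operations that a load/store RISC ISA would express as individual instructions. The mnemonics and the three-way split below are made up for illustration, not any real machine's encoding:

```python
# Toy illustration of CISC macro-instruction cracking: one x86-style
# read-modify-write instruction becomes several RISC-like micro-ops.
# Mnemonics and operand syntax are invented for illustration only.

def crack(instr):
    """Expand a CISC-style instruction into load/store-architecture micro-ops."""
    op, dst, src = instr
    if dst.startswith("["):            # destination is a memory operand
        addr = dst.strip("[]")
        return [
            ("load",  "tmp", addr),    # read the old value from memory
            (op,      "tmp", src),     # do the arithmetic in a register
            ("store", addr,  "tmp"),   # write the result back
        ]
    return [instr]                     # register-only ops pass through

uops = crack(("add", "[0x1000]", "rbx"))
for u in uops:
    print(u)
```

One front-end instruction turning into three dependent micro-ops is one of the bookkeeping costs the post describes: the machinery to track that expansion lives well beyond the decoder itself.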
 
Considering something like the ARM Cortex-A53, incidentally found in another popular ARM device: the A53 is obviously much, much slower, but how similar or different would we expect an Apple macOS desktop/laptop ARM chip to be in terms of ISA, general architecture, etc.? Compared to current Intel desktop chips, would an ARM chip have a similar structure for caches, the memory interface, and so on? Would ARM require larger caches for the same performance? And should we expect ARM chips across the full lineup, with anything from 8 to 64 high-performance cores, or lighter devices like an ARM-based MBA to start with?
 
Just wanted to see who, if anyone, prominent in CPU architecture still worked at the Apple Silicon team.

This guy left well short of three years later and now works at Microsoft. 😂

 