Intel is suffering from fear of cannibalization, just like any mature company. They can't go releasing something that's an order of magnitude more performant and cheaper, because it would demolish their financials.

That's why you need competition.
 
Honestly, I’d rather have Intel compatibility at this point. I don’t really need more performance out of my system. In fact, I’m happy with pretty much everything my iMac does. I don’t sit down and think “man, I wish my processor was 2x as powerful.” I sit down and enjoy my speedy iMac and I’m happy.

The only reason I’d upgrade is to be able to update to the latest macOS, as my machine gets obsoleted.

I get this. It makes practical sense.

However, for laptop users, we should also get increased battery life.
 
I think virtual desktops are the future. Thin clients and Citrix mean safe computing in the workplace: no viruses, no machines down. Storage and the datacenter all move into the cloud.

So ARM or Intel Macs won't make a difference when everything runs through Citrix or a virtual desktop. That puts macOS in trouble in the corporate world of computing.
 
Have you used iPadOS recently? The multitasking has improved so much lately. Tell me more about "complex" applications. What do you mean by that?

As an OS, iPadOS is actually gimped, since it has broken basic background multitasking compared to even a $10 Raspberry Pi Zero W running Raspbian. For example, put an active SSH session in the background on iPadOS and it dies after about 30 seconds, while on Raspbian it runs indefinitely.
 
What about a drastic power reduction? Cooler operation without fans (or smaller fans) meaning increased battery life?

Yep! Also, what about the future opportunity to expand without waiting on Intel's 3-5 year cycle? While our current systems are more than capable of meeting today's use cases with today's software, just imagine the possibilities if compute power were to double... triple... or even increase 5-fold! The entire game could change. We could see locally processed AI embedded in lots of software, interactive speech, improved UIs, etc.

IMO, getting off the Intel slow-walk cycle would be a great first step. Whether or not ARM is the right solution is a different question.
 
There is nothing magical about the ARM instruction set that will make a 6-watt chip keep up with an Intel 150-watt monster.

That's not the point. The ARM instruction set is simpler, more symmetrical, easier to decode, easier to compile to, and does not have to deal with decades of legacy weirdness. Internally, modern superscalar x86-64 and ARM CPUs are not that different. But all other things being equal, an ARM CPU will probably outperform an x86 CPU at the same power consumption, simply because the ARM CPU has less overhead imposed by the ISA.

It would be interesting to see how ARM and x86 bytecodes for identical algorithms compare in terms of decoder pressure, generated binary size, etc. I don't know if there are any good comparisons.
 
That's not the point. The ARM instruction set is simpler, more symmetrical, easier to decode, easier to compile to, and does not have to deal with decades of legacy weirdness. Internally, modern superscalar x86-64 and ARM CPUs are not that different. But all other things being equal, an ARM CPU will probably outperform an x86 CPU at the same power consumption, simply because the ARM CPU has less overhead imposed by the ISA.

It would be interesting to see how ARM and x86 bytecodes for identical algorithms compare in terms of decoder pressure, generated binary size, etc. I don't know if there are any good comparisons.

It's hard to compare, because the byte code generated by the compiler is in a sense misleading - in x86-64, many (though not all) instructions are further decoded by the microcode circuit into a series of micro-ops, each of which is roughly comparable to a RISC ISA instruction.
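To make the micro-op point concrete, here's a toy sketch in Python (the opcode names and encodings are entirely made up, not real x86 or ARM): a CISC-style instruction with a memory operand expands into several RISC-like micro-ops, which is why counting compiler-emitted instructions can be misleading.

```python
# Toy model (hypothetical opcodes, not real encodings): a decoder that
# expands a CISC-style memory-operand instruction into RISC-like micro-ops.
def decode(instr):
    op, dst, src = instr
    if op == "add_mem":                  # x86-style: add [dst], src
        return [("load", "tmp", dst),    # micro-op 1: load from memory
                ("add", "tmp", src),     # micro-op 2: ALU add
                ("store", dst, "tmp")]   # micro-op 3: store result back
    return [instr]                       # simple ops map 1:1

# One x86-style instruction becomes three micro-ops - roughly what an
# ARM compiler would have emitted as three instructions (ldr/add/str)
# in the first place.
uops = decode(("add_mem", "[rdi]", "eax"))
print(len(uops))  # 3
```

So a "shorter" x86 program at the byte-code level may execute roughly the same number of internal operations as its ARM equivalent.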
 
Intel is suffering from fear of cannibalization, just like any mature company. They can't go releasing something that's an order of magnitude more performant and cheaper, because it would demolish their financials.

That's why you need competition.
Seems more like they're suffering from overly complex chip designs that don't scale well to smaller node sizes, and which they can't unwind to simplify without sacrificing performance... hence the (limited release of) Ice Lake CPUs performing worse than Whiskey Lake equivalents.
 
Seems more like they're suffering from overly complex chip designs that don't scale well to smaller node sizes, and which they can't unwind to simplify without sacrificing performance... hence the (limited release of) Ice Lake CPUs performing worse than Whiskey Lake equivalents.

When I was at AMD, the general feeling was that Intel’s designers were not good, and the fabs saved their bacon. I think that’s panned out. Now that the fabs have let them down, here we are.
 
Err. Except the Snapdragon 865 almost certainly won't be a better chip than the A13.

We don't actually know, since it isn't actually shipping yet, but we do know that its predecessor, the 855+, is 43% worse at single-core and 21% worse at multi-core (despite having 33% more cores). IOW, Apple's chip is almost twice as fast. It's fairly unlikely that the 865 makes the 75% jump needed to catch up, much less be better than Apple's chip.
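The percentages above can be sanity-checked with a little arithmetic (treating the A13's single-core score as 100; the benchmark figures are the poster's, so this is just illustrative):

```python
# If the 855+ scores 43% worse than the A13 on single-core, the A13 is
# 100 / 57 ~ 1.75x faster, so the 865 would need roughly a 75%
# generational jump just to tie.
a13 = 100.0
sd855_plus = a13 * (1 - 0.43)       # 43% worse single-core
jump_needed = a13 / sd855_plus - 1  # fractional improvement needed to match
print(f"855+ relative score: {sd855_plus:.0f}")
print(f"Jump needed to tie the A13: {jump_needed:.0%}")
```

That's where the "75% jump" figure comes from, and why "43% worse" translates to "almost twice as fast" the other way around.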

Irrelevant until the A13 runs a proper OS and we're comparing real workloads instead of useless synthetic benchmarks. Visit www.phoronix.com to see how real pros do it.
 
It's hard to compare, because the byte code generated by the compiler is in a sense misleading - in x86-64, many (though not all) instructions are further decoded by the microcode circuit into a series of micro-ops, each of which is roughly comparable to a RISC ISA instruction.

Modern high-performance ARM CPUs also use micro-ops. But that is not what I was getting at. Comparing binary sizes might be interesting because it has effects on I-cache and fetch/decode performance. And of course, it's not just about the size but also the relative efficiency of execution (i.e. including proper profiling etc.). ISA effects are very complex and difficult to reason about. For example, at the most trivial level, ARM might need multiple operations to encode something that x86 can do in one or two. But then ARM has more registers and shorter instructions. Do these things average each other out in the end? Or is one ISA inherently more efficient here? My bet would be on ARM, but that is just speculation.
 
Modern high-performance ARM CPUs also use micro-ops. But that is not what I was getting at. Comparing binary sizes might be interesting because it has effects on I-cache and fetch/decode performance. And of course, it's not just about the size but also the relative efficiency of execution (i.e. including proper profiling etc.). ISA effects are very complex and difficult to reason about. For example, at the most trivial level, ARM might need multiple operations to encode something that x86 can do in one or two. But then ARM has more registers and shorter instructions. Do these things average each other out in the end? Or is one ISA inherently more efficient here? My bet would be on ARM, but that is just speculation.

Not sure one can draw a lot of conclusions about I-cache performance, since I-caches (and buses) will be sized accordingly, to capture equivalent working sets in equivalent-performance devices. The decode performance is undoubtedly more complex in x86-64. It was a huge block on the x86 machines I worked on, much bigger than on the SPARC, MIPS, and PowerPC chips I worked on. It has a sequencer (essentially its own instruction pointer), a large microcode ROM, a set of registers for its own use, dependency-checking circuitry, etc. It was as big as the integer execution block, is my rough recollection.

One thing that does cause all sorts of I-cache performance problems is the fact that in real mode the instruction memory can be writable. That complicates the I-cache coherency circuitry, even if you don’t actually use the feature in actual code.
 
The key thing is that the base OS already runs on ARM as well as the Intel architecture...it should be a simple matter to make an ARM version of macOS. Odds are Apple would not switch the Mac Pro over to this until last...they will start with one of the consumer laptops or the Mac Mini first, and then extend into the rest of the lines.

I run an OS in order to run programs, not to run the OS per se. Some of those programs are Windows programs (in a VM) - the switch over to ARM would suck the performance out, as x86-to-ARM translation is unlikely to be fast. The PowerPC to x86 transition was aided by the enormous performance leap that x86 offered at the time...
 
I run an OS in order to run programs, not to run the OS per se. Some of those programs are Windows programs (in a VM) - the switch over to ARM would suck the performance out, as x86-to-ARM translation is unlikely to be fast. The PowerPC to x86 transition was aided by the enormous performance leap that x86 offered at the time...

If Apple uses x86-to-ARM transcoding (recompiling x86 binary code to ARM and running it natively), you probably won't notice much difference. There might be some initial latency and that's about it.
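The "transcode once, then run natively" idea can be sketched like this (a hypothetical toy model in Python - the opcode mapping and function names are invented for illustration, not any real translator):

```python
# Hypothetical sketch of static binary translation with caching:
# translate each block of x86-style opcodes to ARM-style equivalents
# the first time it's seen, then reuse the cached native translation.
TRANSLATION = {"mov": "mov", "add": "add", "imul": "mul"}  # toy op mapping

cache = {}

def transcode(block_id, x86_ops):
    if block_id not in cache:  # one-time translation = the initial latency
        cache[block_id] = [TRANSLATION[op] for op in x86_ops]
    return cache[block_id]     # every later run executes the cached version

arm_ops = transcode("blk0", ["mov", "imul", "add"])
print(arm_ops)  # ['mov', 'mul', 'add']
```

Once a block is cached, there's no per-instruction interpretation cost, which is why the overhead would show up as startup latency rather than steady-state slowdown.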
 
Intel x86 inside is a must for true compatibility with the rest (90% Windows) of the world. Otherwise, we will switch to Windows. A shame for all!
 
Intel x86 inside is a must for true compatibility with the rest (90% Windows) of the world. Otherwise, we will switch to Windows. A shame for all!

1) AMD invented x86-64, not Intel, so it sounds like you want AMD for true compatibility.
2) You're threatening Apple by saying, "unless you continue to support Windows, we will all switch to Windows"? Makes a lot of sense. :)
 
So, someone somewhere has a prototype offering Intel performance using ARM, for half the power. So what? ... This article just confirms to me that ARM is not a serious contender in the desktop space, and has no hope of taking on Intel in the foreseeable future.
You're basing this on random public information about some other company's ARM products. Apple likely has chips in their labs that are substantially higher performance. Their mobile A-series chips do what they do because they're targeted at the phone/tablet space. Presumably Apple's smart enough to tell their chip designers, "okay, you're targeting for laptop/desktop now, these are your new limits for input power and heat dissipation in that environment, go nuts and see what kind of performance you can get us".

(What this article does, in particular, is to show that yes, it is indeed possible to get x86-level performance out of ARM chips - this may be a useful/new fact for those who steadfastly insist that ARM chips are only useful for phones and tablets. For those who already understand that ARM can scale up, the article is interesting, but not world-changing.)

Quite aside from the question of instruction sets, what Apple gets from an Intel-to-in-house-ARM switch is the ability to get what they want, when they want it, rather than waiting for Intel to get around to it. Currently, if they really want some arcane feature added to the CPU, all they can do is say, "Intel, pretty please?" and hope Intel deigns to listen to their request. If they switch to in-house chips, the answer to "can we have this feature" will always be yes (unless physics or upper management get in the way), and they can throw as much resources at it as they feel necessary. They can draw their own roadmap, rather than relying on Intel's constantly changing roadmap, with chips that fail to materialize, or arrive late, or in limited quantity, or underperform.
 
Amazon AWS is already on board with its own Annapurna Labs 32-core ARM Neoverse N1-based server chip. We just need adoption from one more major player like Alphabet/Google. Competition and cost at scale need to trickle down to the consumer level, since even with AMD competing against Intel, server-class x86-64 CPU prices are still at enterprise levels. I'd like to see 32-core ARM server system boards for a few hundred dollars instead of a few thousand.
 
My 2011 MBP died in 2018 because of the motherboard problem (it had already been replaced once under the replacement program, but that ended). So I got zero - a broken laptop, worth nothing, after 6-7 years.

If you still have it, very old replacement logic boards can often be found inexpensively on eBay. If you care.
 
Don’t do it. Intel is doing fine. We don’t need a chip race as part of differentiation.
 
BeOS... now that's a name I haven't heard in a while...

I have come to learn that software is just as important as hardware. If Apple switches to ARM without the software, the Mac will be as useful as a screen door on a submarine. Software is what makes people still use Windows.
 
Seems more like they're suffering from overly complex chip designs that don't scale well to smaller node sizes, and which they can't unwind to simplify without sacrificing performance... hence the (limited release of) Ice Lake CPUs performing worse than Whiskey Lake equivalents.

The engineers build what the marketing people tell them to build.

You have to understand that Intel isn't a technology company; it's a bakery run by marketers. Instead of cookies they make chips. They excel at process improvement... because their numbers require a viable yield.

Current manufacturing requires a tick, because the market has been conditioned to the old release cadence. They need something new; whether it's more performant or not is irrelevant.
 
> 400 watts
 