Apple has been beating the rest of the ARM world for years. Android hardware has always had "better" tech on paper. More cores. More GHz per core. More RAM. And it has always lost.
Same with AMD until recently. Their older processors had super high clock speeds but were still slower than their Intel counterparts. Then they had those Bulldozer CPUs with 8 cores, which was a lot at the time. Finally with Ryzen they actually became competitive for a sustained period of time, and I didn't notice until the market already had, since I'd seen too much false hype in the past.
 
Thanks for the explanation and sorry if I wasn't more specific.
My question was regarding how ARM handles backwards compatibility with previous versions of the ARM architecture and instruction set.

One of the limitations regularly cited for Intel is the baggage of the x86 design: even if they invested tons of money in new designs, they would still never be able to match ARM in the long run, because they need to keep compatibility with existing software.
They tried creating Itanium and did a terrible job of emulating existing x86 code, probably because they had to do it in real time, unlike Rosetta 2, which can do ahead-of-time translation because Apple also controls the operating system.
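As an aside, that ahead-of-time translation is invisible to the app itself; a process can ask macOS at runtime whether it is currently being translated. A minimal C sketch, assuming the sysctl.proc_translated key that Apple documents for Rosetta 2 on macOS 11 and later:

```c
#include <stdio.h>
#include <sys/sysctl.h>

/* Returns 1 if the current process is being translated by Rosetta 2,
 * 0 if it is running natively, and -1 if the key is unavailable
 * (for example on macOS versions that predate Rosetta 2). */
static int running_under_rosetta(void)
{
    int translated = 0;
    size_t size = sizeof(translated);
    if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) == -1)
        return -1;
    return translated;
}

int main(void)
{
    switch (running_under_rosetta()) {
        case 1:  puts("x86_64 binary, translated by Rosetta 2"); break;
        case 0:  puts("running natively");                       break;
        default: puts("translation status unavailable");         break;
    }
    return 0;
}
```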

I was curious whether, on the ARM side, things are more flexible and have evolved over time, or whether it also has to carry baggage from when it was first introduced in 1985. For example, the M1 only needs binary compatibility with 64-bit iOS apps dating from 2013.

A lot less baggage. ARM releases new versions of the instruction set over time, and Apple can choose (and has chosen) what it wants to support. Apple has also added, essentially, its own instructions for multimedia calculations and the like. And as long as they believe they will have no issue with compilers and such (because they write them themselves, or they think the community will support whatever they do), they can even drop instructions they don't care about, replace them with others, etc.
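As a concrete illustration of that pick-and-choose approach, macOS exposes which optional ARM features the CPU implements through sysctl. A minimal C sketch; the specific hw.optional.arm.FEAT_* key names below are assumptions based on what recent macOS releases report and vary by OS version, so absent keys are simply treated as "not supported":

```c
#include <stdio.h>
#include <sys/sysctl.h>

/* Query a boolean hw.optional.* key; treat a missing key as "feature absent". */
static int has_feature(const char *key)
{
    int value = 0;
    size_t size = sizeof(value);
    if (sysctlbyname(key, &value, &size, NULL, 0) == -1)
        return 0; /* key not present on this CPU / macOS combination */
    return value;
}

int main(void)
{
    /* Example key names only; the exact set differs between macOS releases. */
    const char *keys[] = {
        "hw.optional.arm.FEAT_AES",     /* ARMv8 AES instructions        */
        "hw.optional.arm.FEAT_DotProd", /* ARMv8.2 dot-product extension */
        "hw.optional.arm.FEAT_BF16",    /* bfloat16 arithmetic           */
    };
    for (size_t i = 0; i < sizeof(keys) / sizeof(keys[0]); i++)
        printf("%-32s %s\n", keys[i], has_feature(keys[i]) ? "yes" : "no");
    return 0;
}
```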
 
All Intel needs to do is ship those better chips.
What better chips?

Intel has nothing. They're putting out marketing slides only; they have nothing that competes with this, and they're STILL not shipping 10nm server parts, for example, despite promising them since 2016. Ice Lake SP has been delayed until 2021.

They haven't even gotten 10nm working properly in volume with large dies yet, and you guys think they have an ace up their sleeve with 7nm? The fact is, they're struggling to make 4-core 10nm parts with a GPU that run at slower clocks than their old stuff, never mind competing with AMD or Apple at this point.

🤣

If Intel had something to release, they'd be shipping it. They don't, so they aren't.

What they say they might ship 2-3 years from now is not relevant to a comparison with the M1. That is the minimum time it will take them to get their own 7nm process out, per their own marketing road-map, which says 7nm is due in 2022; and based on how 10nm and 14nm went, even that is extremely optimistic.
 
However I can tell you that there are many instances where current Macs are used to run a non-Mac OS.
I am sure you can quote anecdata like this for weeks. And I don't disagree.

But companies like Apple make decisions based on actual research and market analysis, and I bet they simply don't care if they lose the few thousand users like you and yours who really need Windows. The future benefits far outweigh the handful of lost customers.

Apple especially has a history of not flinching when it comes to that: they were the ones to get rid of the floppy, the optical drive, removable batteries/RAM, the audio jack, etc., narrowing their target market down to the one they want to keep.

Getting rid of 32-bit apps first, and then of Intel and Windows altogether, are just more moves in the same category: forward, leaving a small percentage of old clients behind in order to build something that will attract a different (and hopefully bigger) group.

I am happy they don't stick to Intel just to please the 5% using Windows.

On a much smaller scale, I have made such calls in my own business; cutting off legacy or otherwise "unwelcome" clients is normal practice if you need to move forward. Just this year we decided to "encourage" over 2,000 subscriptions (roughly 25% of clients) to switch to new plans or stop being our clients altogether, simply because our business has new priorities.
 
I think this, plus the performance improvement, will hopefully be a kick in the pants to get people to consider whether they actually need Windows on their Mac.

Because I'll bet that many (including myself) can get away from it. I pretty much did. I have a desktop I can RDP to if I really need it, but 95+ percent of the time I do not need Windows at all.

Much better performance and no fan noise all the time > having to RDP to a Windows box the few times I need it.
 
When my father ran an Apple VAR, we asked at a meeting with Apple engineers why Apple outperformed IBM and Intel with specs that should not have made it possible. The engineer said, "We look at the structure of the entire machine, not just the CPU. We pick the most capable of all available components and engineer everything to work together at 100%. Many of our competitors focus on one aspect, CPU power. That's like putting a motor from a Lamborghini in a Yugo. It has to have the best of everything."
 
You know that the "VirtualApple" result is no longer on the Geekbench website, right?

Whether it was fake or an unintentional mistake, I don't know, but in any case the test result and the "news" are no longer valid.
It’s still there.


It's not on the full list because we mocked that chart up to show people where it ranked.
 
What better chips?

Intel has nothing. If Intel had something to release, they'd be shipping it. They don't, so they aren't.
I think that's why I said all Intel needs to do is to "ship" those chips. ;) But they have not been able to.
We have already seen how many times Intel has failed to meet its own roadmap. There's a reason Apple made this switch.
 
That top spot wasn't real, folks. Why? Because Geekbench put the top spot back a day later, and ironically the M1 is now on page 3. Weirdly, the top spot is an AMD Ryzen 5800X in an "iMacPro1,1".


Now, if we were to throw in the Threadripper scores yet released, they'd be even higher in single-core, and their multi-core scores would be fourfold larger or more.

[Attached screenshot: Screen Shot 2020-11-18 at 8.55.56 AM.png]
 
Massive reorder buffer: UltraSparc V had that. I know, because I was the original designer of the reorder unit on that chip.

"Width": what width are you referring to? There is nothing unusual about the execution width. It is, in fact, identical to that used in, say, the Athlon 64 and Opteron. (I know, because I owned the integer execution unit for the first of those designs.)

Lolz to all the people arguing about CPU design and implementation with someone who is obviously a very experienced CPU/hardware engineer.
 
This might inadvertently hurt Mac sales in the short term. If the reviews of AS are too glowing, then far more people are going to wait for AS iMacs and higher-end products.

It's called the Osborne Effect.

The Osborne effect is a social phenomenon of customers canceling or deferring orders for the current soon-to-be-obsolete product as an unexpected drawback of a company's announcing a future product prematurely.

The term was coined after the Osborne Computer Corporation, a company that took more than a year to make its next product available and eventually went bankrupt in 1983.
 
It's called the Osborne Effect.
Indeed it is; I remember it well. I had an Osborne 1 for quite a while, lugging it around the Tube in London, and then an Osborne 4, which was a fabulous piece of kit that I loved using and missed the most when I left that job :)
 
I am sure you can quote anecdata like this for weeks. And I don't disagree. But companies like Apple make decisions based on actual research and market analysis, and I bet they simply don't care if they lose the few thousand users who really need Windows.
Really soundly reasoned reply! Innovation can't happen without leaving some of the old ways behind. Much like the (apocryphal) story of Henry Ford saying "if I'd asked people what they wanted, they would have said 'faster horses'".
 
Isn't this closing Apple's ecosystem even more? Developers are now more challenged than ever to write instructions specific to Apple hardware.
Not really, no. From a software developer's point of view, an M1 Mac is 98% like an Intel Mac. An Intel Mac is about 20% like an Intel Windows computer. Sure, the Intel Mac and Intel Windows computers share an instruction set, but modern programmers spend very little time (if any!) writing assembly. Frequently, if debugging forces you to look at assembly, you find another path (recompile with debug symbols, add logging... whatever).

Sure, that isn't everybody; if you are writing a game engine you might resort to assembly here and there, but long before that you have already committed wholeheartedly to Metal, or whatever Apple's GPU layer is the year you are working on that graphics engine. That already commits the effort entirely to Apple's platform, far more than any use of ARM assembly does.

Talk to anyone who writes Mac apps and has ported an app to the M1. Unless they are VMware or Parallels, they likely spent more time dealing with macOS 11 changes than with Intel-to-M1 changes.

In fact, the ARM part will help you out if you ever work on Android, or if Windows on ARM turns out to be the attempt where Microsoft doesn't give up and go back to Intel. I mean, not a lot, because you don't really do much direct assembly there either. Nor on the Raspberry Pi.

That isn't 100% of the story (it doesn't cover people who write compilers for a living or for a hobby, nor debuggers or disassemblers... but that is something like 0.0001% of programmers), but it is close enough.
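To put a sketch behind that 98%: the same plain C builds unchanged for both architectures, the only ISA-specific part is an optional predefined-macro check, and one clang invocation (shown in the comment) produces a universal binary. The file and program names here are made up for the example:

```c
/* Build a universal (fat) binary in one step:
 *   clang -arch arm64 -arch x86_64 -O2 -o sum_demo sum_demo.c
 * The C below needs no changes for either architecture. */
#include <stdio.h>
#include <stdint.h>

/* Portable code path: compiles identically for arm64 and x86_64. */
static int64_t sum(const int32_t *v, size_t n)
{
    int64_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += v[i];
    return total;
}

int main(void)
{
#if defined(__aarch64__)
    puts("compiled for arm64 (Apple Silicon)");
#elif defined(__x86_64__)
    puts("compiled for x86_64 (Intel)");
#endif
    int32_t v[] = {1, 2, 3, 4};
    printf("sum = %lld\n", (long long)sum(v, sizeof(v) / sizeof(v[0])));
    return 0;
}
```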
 
It's not really that hard. All modern CPUs do out of order execution. If you're already executing stuff in the "wrong" order, then you can do it concurrently.
FYI, "out of order execution" on a modern CPU doesn't really mean what you think it does. It means the CPU can execute parts of the instruction stream that appear later before ones that appear earlier, but the effects from a single thread still appear to be in order. For most CPUs it is also defined that the effects as seen across multiple CPUs appear to be in order.

The "appear to be in order" part frequently discounts effects like cache traffic and TLB traffic, but it does include things like interrupts, exceptions (normally; some CPUs define imprecise exceptions to make this easier), memory stores, and sometimes memory loads.

So, for example, if you have a load, some math on the value loaded, some math on values loaded earlier, then some math combining both sets, and a store, all of that math can happen in whatever order the CPU finds most beneficial. The store will only happen after the loads, and only after all the math is determined not to throw an exception.
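To make that concrete, here is a small illustrative C function (names invented for the example); the comments mark which operations an out-of-order core is free to overlap and which it has to keep ordered:

```c
#include <stdint.h>

int64_t combine(const int64_t *a, const int64_t *b, int64_t *out)
{
    int64_t x = *a;          /* load: may miss in cache                         */
    int64_t y = *b;          /* independent load: can issue alongside *a        */

    int64_t t1 = y * 3 + 7;  /* depends only on y: can run while the load of    */
                             /* *a is still waiting on memory                   */
    int64_t t2 = x * x;      /* depends on x: must wait for that load           */

    int64_t r = t1 + t2;     /* needs both earlier results                      */
    *out = r;                /* the store becomes visible only after the loads  */
                             /* complete and earlier work is known not to fault */
    return r;
}
```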

Out-of-order execution mostly helps you get some work done after a cache miss. The closely related speculative execution is more of the same: it lets the CPU execute across conditional branches before the result of the condition is known, sometimes down both directions of the branch, with whichever path turns out to be wrong getting squashed.

This is all the CPU working extremely hard, because the sequence of instructions it is executing absolutely cannot just be run concurrently as written. The instructions have a ton of implicit dependencies, and untangling them to decide what can be executed right now is a big win.
 
Not really, no. From a software developer's point of view, an M1 Mac is 98% like an Intel Mac. Modern programmers spend very little time (if any!) writing assembly.
^^^^^^^^^^ This!

As a developer myself, I can confirm: there is more to deal with in Big Sur than in the lower-level ARM architecture.
 