Research and you will see how many companies went out of business trying to compete with Intel making processors.
Oh, remember Apple was one of those failed attempts with AIM and PowerPC.

PowerPC exists to this day. Apple did not go out of business. Exponential and Motorola did, I suppose (though parts of Exponential are now part of Apple's design team, and Motorola lives on as Freescale).

Many companies have survived for years making processors, though. AMD is still around. Oracle. ARM. Qualcomm. IBM. Etc.
 
Research and you will see how many companies went out of business trying to compete with Intel making processors.

Yeah. A lot of mobile device companies went out of business trying to compete with Palm, Nokia, and RIM.

And Power still beats x86 at the very high end (water cooled IBM mainframes).
 
After seeing the A11 Bionic's performance, it's not crazy to think this. Apple is winding up and getting ready to obliterate Intel. Think about it: the Apple TV 4K has the iPad Pro's A10X in it. It scores about 9,480 in Geekbench and only consumes 6 watts (source: https://images.apple.com/th/environment/pdf/products/appletv/Apple_TV_4K_PER_sept2017.pdf). If Apple sandwiched three of those chips together, they could match Intel's latest 8700K, and it would only consume 18 watts versus Intel's 95 W TDP. If they used the A11 they could blow past Intel's latest offering. I know it's not that simple, and Geekbench isn't the best benchmark to go by, but it's definitely a possibility.
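To put rough numbers on that back-of-the-envelope idea (the A10X figures are the ones quoted above, the 8700K score is just an assumed ballpark, and perfect three-chip scaling is exactly the "it's not that simple" part), something like:

```swift
// Back-of-the-envelope sketch only. The A10X figures are the ones quoted
// above; the 8700K score is an assumed ballpark; perfect linear scaling
// across three dies is the big, unrealistic simplification.
struct Chip {
    let name: String
    let geekbench: Double   // multi-core score
    let watts: Double       // package power / TDP
}

let a10x   = Chip(name: "A10X (Apple TV 4K)", geekbench: 9_480, watts: 6)
let i8700k = Chip(name: "Core i7-8700K", geekbench: 28_000, watts: 95)

// Hypothetical "sandwich" of three A10X dies, assuming perfect scaling.
let sandwich = Chip(name: "3x A10X (hypothetical)",
                    geekbench: a10x.geekbench * 3,
                    watts: a10x.watts * 3)

for chip in [a10x, sandwich, i8700k] {
    let perWatt = chip.geekbench / chip.watts
    print("\(chip.name): \(Int(chip.geekbench)) points at \(Int(chip.watts)) W, ~\(Int(perWatt)) points/W")
}
```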

Seriously?

People are still using Geekbench to compare completely different platforms?

Why do people talk like they understand, with no engineering background to support them?

The clickbait news recently has seriously misled the masses who don't understand.
 
Seriously?

People are still using Geekbench to compare completely different platforms?

Why do people talk like they understand, with no engineering background to support them?

The clickbait news recently has seriously misled the masses who don't understand.

Seriously, you need to relax. All I was saying was that if Apple could scale the architecture, it could be competitive...
 
Linux (and even a version of Windows) runs just fine on ARM64 chips. So does TensorFlow. And the GPGPU and ML acceleration cores that Apple now builds into its SoCs may well outperform any Intel general-purpose processor (unless Intel does the same).

I actually have Linux running on a few ARM chips, and I have it running on an Intel Xeon too. The Xeon's clock is about double the ARM's, but the speed ratio is much more than the clock ratio. It is easy to see why: Intel used all kinds of architectural features and a lot of L1/L2/L3 cache. Apple could implement all of this on a future 16-core ARM chip, but then it would be as big, as power-hungry, and as expensive as the Xeon. The Xeon is dramatically faster even after you account for the different clock rate and number of cores. But the ARMs have their place; I'm using them on a mobile, battery-powered platform. (I'm also using ARM Cortex-M processors.)
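As a rough sketch of why the gap is bigger than the clock ratio (the numbers below are made-up placeholders, not measurements from my machines; the point is only that cores and per-clock throughput multiply together):

```swift
// Toy model: sustained throughput ~ cores x clock (GHz) x instructions per clock.
// All figures are illustrative placeholders; the point is that IPC (wide issue,
// out-of-order execution, big L1/L2/L3 caches) matters as much as raw clock.
struct CPUModel {
    let name: String
    let cores: Int
    let clockGHz: Double
    let ipc: Double   // effective instructions per clock on a given workload
}

let smallARM = CPUModel(name: "Small ARM SoC", cores: 4, clockGHz: 1.5, ipc: 1.0)
let bigXeon  = CPUModel(name: "Xeon",          cores: 8, clockGHz: 3.0, ipc: 2.5)

for cpu in [smallARM, bigXeon] {
    let giga = Double(cpu.cores) * cpu.clockGHz * cpu.ipc
    print("\(cpu.name): ~\(giga) G instructions/s")
}
// Here the Xeon's clock is only 2x, but the modeled gap is 10x once cores and
// IPC are included -- the "speed ratio is much more than the clock ratio" point.
```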

Today, if your project needs to run TensorFlow, the only GPU that matters is Nvidia's. It does not matter how fast the GPU is; if it is not a recent-vintage Nvidia, it is of no use. Also, I doubt Apple will ever put anything like dual GTX 1080s in anything they sell, so those looking for a high-end TensorFlow-capable machine will always be on Linux/Xeon.

I think Apple's reason for ARM is to save power, space, and cost, and they are thinking ARM is fast enough for most consumer web browsing and media playback. They are correct; soon ARM will be fast enough for non-demanding users.

What I'm thinking is that Apple is going to abandon the part of the market that needs high-end computing, or continue what they are doing now: building very high-priced, low-volume solutions like the $5,000 iMac Pro.

I wish I had Apple's data, the stuff they get from those users who enable streaming feedback to Apple. I'd be interested to know what percentage of users use even the consumer-level content-creation software like GarageBand, iMovie, or even Numbers and Keynote.

I'd bet anyone who clicks "File -> Save As" in any app is in the minority. People reading this are Apple geeks, not typical users.
 
I actually have Linux running on a few ARM chips, and I have it running on an Intel Xeon too. The Xeon's clock is about double the ARM's, but the speed ratio is much more than the clock ratio. It is easy to see why: Intel used all kinds of architectural features and a lot of L1/L2/L3 cache. Apple could implement all of this on a future 16-core ARM chip, but then it would be as big, as power-hungry, and as expensive as the Xeon. The Xeon is dramatically faster even after you account for the different clock rate and number of cores. But the ARMs have their place; I'm using them on a mobile, battery-powered platform. (I'm also using ARM Cortex-M processors.)

Today, if your project needs to run TensorFlow, the only GPU that matters is Nvidia's. It does not matter how fast the GPU is; if it is not a recent-vintage Nvidia, it is of no use. Also, I doubt Apple will ever put anything like dual GTX 1080s in anything they sell, so those looking for a high-end TensorFlow-capable machine will always be on Linux/Xeon.

I think Apple's reason for ARM is to save power, space, and cost, and they are thinking ARM is fast enough for most consumer web browsing and media playback. They are correct; soon ARM will be fast enough for non-demanding users.

What I'm thinking is that Apple is going to abandon the part of the market that needs high-end computing, or continue what they are doing now: building very high-priced, low-volume solutions like the $5,000 iMac Pro.

I wish I had Apple's data, the stuff they get from those users who enable streaming feedback to Apple. I'd be interested to know what percentage of users use even the consumer-level content-creation software like GarageBand, iMovie, or even Numbers and Keynote.

I'd bet anyone who clicks "File -> Save As" in any app is in the minority. People reading this are Apple geeks, not typical users.

An ARM with equivalent "architectural features" will never be as big or power-hungry or expensive as x86-64. I've designed CPUs based on RISC (PowerPC, SPARC, F-RISC) and CISC (x86-64). The overhead for handling variable-length instructions and crazy referencing modes in x86-64 typically took about 20% of the die area per core. The instruction decoder, microcode ROM, load/store unit, etc. all add lots of complication. There's also a hit due to the inherent inability of the CPU to optimize to "reduced" instructions as well as a compiler can. x86-64 also renders the caches less efficient than in RISC architectures. A lot of the fancy shenanigans x86-64 architectures do are necessary to compensate for this stuff.
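A toy illustration of the decode problem (an invented one-byte length prefix, nothing like real x86 encoding): with fixed-width instructions every decoder slot knows where its instruction starts, while with variable-length ones you cannot find instruction boundaries without examining earlier bytes first.

```swift
// Toy illustration only (an invented encoding, nothing like real x86).
// Fixed 4-byte instructions: the Nth instruction starts at 4 * N, so many
// decoders can start in parallel without looking at each other's bytes.
func fixedWidthStarts(count: Int) -> [Int] {
    return (0..<count).map { $0 * 4 }
}

// Variable-length: pretend the first byte of each instruction encodes its
// length. Boundaries are only known by walking the stream byte by byte,
// which is the serial dependency real x86 decoders burn die area to hide.
func variableWidthStarts(bytes: [UInt8]) -> [Int] {
    var starts: [Int] = []
    var i = 0
    while i < bytes.count {
        starts.append(i)
        i += max(1, Int(bytes[i]))   // must decode byte i before locating the next start
    }
    return starts
}

print(fixedWidthStarts(count: 4))                                   // [0, 4, 8, 12]
print(variableWidthStarts(bytes: [2, 0, 3, 0, 0, 1, 4, 0, 0, 0]))   // [0, 2, 5, 6]
```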
 
Everybody is talking about future A-series chips being used in some Mac. While not impossible, it is unlikely at the moment.

I'm actually more intrigued by their in-house modems and chips for wireless technology. That would deal Intel and Qualcomm quite a blow.
 
Today, if your project needs to run TensorFlow, the only GPU that matters is Nvidia's. It does not matter how fast the GPU is.

That's right. Because serious ML researchers won't be buying their own GPUs, but will soon be using racks of new TPUs and such in the cloud for training. And many SOCs, such as the automotive controllers and Apple's A11, already include ML inference engines.

In any case, Nvidia already appears to provide GPGPU/ML drivers for some future high-thermal-envelope Mac Pro+. That product very likely won't be using any laptop processor.

Yet another possible scenario is that Apple has designed an ARM processor chip for their data centers. They may have run into the same problem as Google, which needed to build TPUs because its x86 systems were too slow for the growth in various voice services. Also, the cost of power and cooling in data centers is greater than the cost of processor chips. So if Apple found a way to both reduce power for their server workloads and add the needed inference acceleration engine, as they did with the A11 Bionic, it might pay off (lower total data center costs) for them to do a high-end custom ARM+ML/DNN SoC just for their vast number of data center servers.
 
Correct. They will grow stronger by turning Macs to ARM.

Great. I suppose you are going to write the software for the new platform?
What makes you think this would be suicide? Computers still need software, but plenty of ways to remedy that have been suggested throughout this thread.

No real ways. Virtualization is not a solution, especially between different platforms (e.g., where you actually have to emulate the CPU itself).
With the iPhone being over 80% of annual revenue, it's not the suicidal scenario it was in ~2007.

That would be true only if Apple planned to switch Macs to iOS, though. Otherwise, it'd be a platform starting from zero.
 
Great. I suppose you are going to write the software for the new platform?

No real ways. Virtualization is not a solution, especially between different platforms (e.g., where you actually have to emulate the CPU itself).

That would be true only if Apple planned to switch Macs to iOS, though. Otherwise, it'd be a platform starting from zero.

Just like when they switched from 68k to PPC and PPC to x86?

Only this time they control the programming language, the compiler, the intermediate bytecode representation, the OS, the distribution mechanism, the CPU design, and, if they want, even the ISA.

And this time they have a massive software library (iOS) that will run even if the OS isn't iOS on ARM - heck, it'll run on x86 in the developer simulators, so they can surely get it to run on macOS-on-ARM, which will undoubtedly share the same kernel and most of the same software stack. The primary difference is AppKit vs. UIKit, and those have been drifting closer together over the years. The good news is we will likely get touch-screen ARM Macs, and then UIKit will just work.
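For what it's worth, cross-platform code already papers over that split with conditional compilation; a minimal sketch using only the stock frameworks:

```swift
// Minimal sketch of bridging the AppKit/UIKit split with stock frameworks.
#if os(macOS)
import AppKit
typealias PlatformColor = NSColor
typealias PlatformFont  = NSFont
#else
import UIKit
typealias PlatformColor = UIColor
typealias PlatformFont  = UIFont
#endif

// Everything below compiles unchanged for iOS-on-ARM, the x86 simulator,
// or a hypothetical macOS-on-ARM.
func accentColor() -> PlatformColor {
    return PlatformColor(red: 0.0, green: 0.48, blue: 1.0, alpha: 1.0)
}

func bodyFont() -> PlatformFont {
    return PlatformFont.systemFont(ofSize: 14)
}
```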
 
Oh boy. Apple already gets a tiny share of software compared to PC and they want to move to a new CPU with a new instruction set and make it harder?

Sounds swell.
 
PowerPC exists to this day. Apple did not go out of business. Exponential and Motorola did, I suppose (though parts of Exponential are now part of Apple's design team, and Motorola lives on as Freescale).

FWIW: Freescale is no more; they were acquired by NXP.

Seriously?

People are still using Geekbench to compare completely different platforms?

The developer of Geekbench has stated numerous times that it is absolutely designed to be comparable across platforms.

https://www.reddit.com/r/pcmasterra..._scores_be_compared_across_platforms/dhc0ne4/

We've designed and built Geekbench to be comparable across platforms and architectures. So you absolutely can compare scores between your smartphone and PC.

http://support.primatelabs.com/kb/geekbench/interpreting-geekbench-3-scores

The Geekbench score provides a way to quickly compare performance across different computers and different platforms without getting bogged down in details

So I don't know what your point is.
 
I'm aware of the Freescale acquisition. Not exactly a "went out of business" case is all I'm saying.

FWIW: Freescale is no more; they were acquired by NXP.



 
Just like when they switched from 68k to PPC and PPC to x86?

Only this time they control the programming language, the compiler, the intermediate bytecode representation, the OS, the distribution mechanism, the CPU design, and, if they want, even the ISA.

And this time they have a massive software library (iOS) that will run even if the OS isn't iOS on ARM - heck, it'll run on x86 in the developer simulators, so they can surely get it to run on macOS-on-ARM, which will undoubtedly share the same kernel and most of the same software stack. The primary difference is AppKit vs. UIKit, and those have been drifting closer together over the years. The good news is we will likely get touch-screen ARM Macs, and then UIKit will just work.

There's a huge difference with this transition. The 'old' platform does not consist of really older, slower CPUs that can be emulated easily by a faster one. Another considerable difference is that almost every third-party company writing software for Macs was also writing for Intel anyway. This is not the case with ARM. It is very likely that Macs will lose software by switching, as not every software house will be eager to follow. There are other issues as well, like what happens with Thunderbolt, etc.
 
There's a huge difference with this transition. The 'old' platform does not consist of really older, slower CPUs that can be emulated easily by a faster one. Another considerable difference is that almost every third-party company writing software for Macs was also writing for Intel anyway. This is not the case with ARM. It is very likely that Macs will lose software by switching, as not every software house will be eager to follow. There are other issues as well, like what happens with Thunderbolt, etc.

When you write software, you generally are not "writing for Intel." Most software can simply be recompiled by flipping a switch. When was the last time anyone had to worry about the underlying ISA? Sure, there are exceptions, but most software would be easy to port.
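A minimal sketch of what that "switch" usually looks like (the arch() conditions are standard Swift; the AVX2/NEON routines are hypothetical placeholders standing in for the rare hand-tuned path):

```swift
// Most code is like this: nothing in it cares which ISA it runs on, so
// "porting" really is just recompiling for the other target.
func portableSum(_ values: [Int]) -> Int {
    return values.reduce(0, +)
}

// Hypothetical hand-tuned stubs (placeholders; real code might use intrinsics
// or Accelerate here). They only exist so the example below compiles.
func fastSumAVX2(_ values: [Int]) -> Int { return values.reduce(0, +) }
func fastSumNEON(_ values: [Int]) -> Int { return values.reduce(0, +) }

// The rare ISA-specific path is guarded explicitly and falls back to the
// portable version everywhere else.
func fastSum(_ values: [Int]) -> Int {
    #if arch(x86_64)
    return fastSumAVX2(values)
    #elseif arch(arm64)
    return fastSumNEON(values)
    #else
    return portableSum(values)
    #endif
}
```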
 
Apple has a regular and a Pro lineup in their MacBooks, and now in the iMac lineup.
I can see the iMacs using x86 for a few years.
But the regular 12-inch MacBook and/or the MacBook Air could potentially make use of an ARM architecture.
My money is on the MacBook Air for starters. They could release a new "cheap" entry-level MacBook Air with ARM and phase out the x86, running macOS on ARM to get development of apps/programs started, at $999, with different storage tiers. Or the MacBook and the MacBook Air, so only the Pro versions of the MacBooks get an Intel CPU.
 
The MacBook Air is a legacy product. They might move the MacBook to ARM, though.
It has recently been updated with a newer CPU, so... it is not dead yet.
They are keeping it around for a reason.
They could kill the 13.3-inch and release a new 11-inch MacBook Air with a better screen that has the same DPI as an iPad Pro, for the same price as the 12.9-inch iPad Pro, but with a keyboard/trackpad and without a touchscreen, running an ARM-based macOS. For app developers it should be a breeze to convert iPad apps to this new MacBook Air to work with a trackpad/keyboard.

Or... (I prefer that):

Release a new keyboard cover with a trackpad, and an ARM-based macOS for the iPad Pros.
If the keyboard cover is off, the iPad will use touch-based iOS, but with the keyboard/trackpad cover on, you can choose to use macOS.

Developers would be able to use the same source code and just add a macOS GUI profile.
 
PowerPC exists to this day.

...but not as a mainstream desktop workstation processor. Then there's DEC Alpha, SPARC, MIPS, Transmeta, Cell - either dead, dying, stagnating or - best-case scenario - living on in a niche market. Once upon a time, Windows NT was available for Alpha, MIPS and PPC. Even Intel's next-gen processor - Itanium - failed to compete with x86!

ARM started out as a desktop workstation processor (which just happened to be ridiculously power-frugal) - but only survived because ARM saw the writing on the wall and re-focussed on embedded and mobile applications.

However, the die was cast for that some years ago. As time passes, less and less software relies on lovingly hand-crafted assembly language or directly driving hardware. All modern operating systems have a multi-processor heritage and feature hardware abstraction. The .NET, Android/Dalvik, and Java platforms are all virtual-machine-based; the Apple dev tools are sort of halfway there with LLVM (so all the C/ObjC/Swift/Fortran etc. front ends are processor-independent). Hardware is abstracted by frameworks like OpenGL, Metal or DirectX...
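A tiny illustration of that front-end/back-end split, assuming a stock Swift toolchain (the target triples in the comments are just examples): nothing in the source mentions an ISA, and the same file can be lowered through LLVM for either one.

```swift
// portable.swift -- nothing here mentions an ISA. The Swift front end lowers
// it to LLVM IR; only the LLVM back end picks x86-64 or ARM64, e.g.:
//   swiftc -emit-ir portable.swift -target x86_64-apple-macosx10.13
//   swiftc -emit-ir portable.swift -target arm64-apple-ios11.0
// (Illustrative invocations; the available targets depend on installed SDKs.)
func fibonacci(_ n: Int) -> Int {
    return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2)
}
print(fibonacci(10))   // 55
```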

Great. I suppose you are going to write the software for the new platform?

...a problem for big bloatware like Adobe CS, or maybe products that need to tinker with bare metal. However, a lot of modern software is just gonna be - theoretically - 'flip the switch' and recompile. Probably not quite so simple in reality but still a far cry from the major rewrite that may have been needed in the past.

Just like when they switched from 68k to PPC and PPC to x86?

Easier, for the reasons above.

Another considerable difference is that almost every third-party company writing software for Macs was also writing for Intel anyway.

...and now, almost every company writing for Mac is also writing for iOS (ARM) and Android (mostly ARM) and/or the web (CPU agnostic) and/or server-side (largely Linux) - while any decent programmer will only be writing hardware-specific code in the few cases where it is absolutely necessary.

The ‘old’ platform does not consist of really older, slower CPUs that can be emulated easily by a faster one.

That's a problem. However, code translation technology has advanced a lot since the days of full-blown emulation a la the 68k-on-PPC or SoftWindows. Rosetta did a pretty good job of running PPC code on x86, which wasn't quite such a performance leap as 6502-68k or 68k-PPC. Also, hopefully, in this day and age, more software will have used OS frameworks rather than directly called the more esoteric CPU features.

Of course, some of it could be done in hardware - all current Intel CPUs work by translating x86/AMD64 code into internal RISC-like instructions, anyway.
 
Also, hopefully, in this day and age, more software will have used OS frameworks rather than directly called the more esoteric CPU features.
People seem to forget what age we're in now. Software has evolved a ton since the '80s/'90s and even the early '00s. Apple has publicly made tons of efforts to make it easier for apps to go cross-platform should the need arise. It's in their best interest regardless of any of these rumors; technology is always changing. The far future may not even consist of ARM or x86-64, so it's best to lay down a foundation that is easy to modify in code rather than depend on specific hardware.

I can't imagine Intel even wants x86 to stick around forever; they just don't want to get left behind, obviously. For all we know, Apple and Intel are working on something to provide a "best of both worlds" scenario, though I would find it more likely for Apple to just go at it themselves. Sooner or later we have to let go of the past (x86 legacy comes with a LOT of wasted baggage that Apple really doesn't need), and Apple has never been one to shy away from progress.
 