The PowerPC was originally the work of the "AIM Alliance": (A)pple, (I)BM, and (M)otorola. I recall IBM and Motorola both produced G3 and G4 processors at different times. IBM was the main company behind the G5, and I recall (at least based on its announcement) that IBM built a US-based fabrication plant specifically for that processor.

At the same point in its existence, PowerPC (and other RISC architectures, like SPARC and Alpha) also showed incredible promise, and the common thought during that time was that these various architectures were going to supplant x86 in the very near future. Even Intel was trying to replace x86 with a successor when they introduced Itanium. Apple was the only manufacturer to turn out a consumer computer running a "modern" RISC CPU.

My only point was that what we're seeing and hearing about ARM supplanting x86, and Apple taking the lead with an incredible CPU design that is leaving the "old, inefficient" x86 architecture in the dust, echoes what was being said ~25 years ago.

There are some very important differences between then and now.

First, ARM performance is real. It has demonstrated it's as fast or faster than the best Intel and AMD can produce.

Second, it's great for laptops because it runs at low power and stays cool even at high performance.

Third, the M1 Max and Ultra have shown it can scale to the highest end desktops.

Fourth, it has a high production volume that keeps its unit costs low. Combined with iPads and iPhones, Apple is producing over 200M ARM chips a year.

Perhaps ARM will hit a performance wall, where its single- or multi-core performance can't improve without massive power increases. But that wall is clearly beyond market needs for the next few years. And this time Apple won't be a small customer of IBM without the pull to get them to invest more in solving the problem. This time Apple controls their design destiny and is a huge TSMC customer; they have all the tools to solve these problems if they arise.

Qualcomm producing anything in the same performance and energy-use ballpark as the M1 is an existential threat to Intel and AMD. It's not speculation that ARM is more efficient; common benchmarks show it uses one third the power at the same performance levels. It's also clear that process size is not a significant part of the difference: each process step is 10-15% more efficient, not 70%.
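The compounding arithmetic behind that last claim can be sketched quickly. This is a hedged back-of-the-envelope check, assuming the post's own 10-15% efficiency gain per node step (the function name and numbers are illustrative, not sourced figures):

```python
# Back-of-the-envelope: fraction of original power remaining after N
# process-node steps, if each step cuts power by `gain_per_step`.
# The 10-15% per-step figure is the post's assumption, not a sourced number.
def power_after_steps(steps: int, gain_per_step: float) -> float:
    return (1.0 - gain_per_step) ** steps

# Even a generous two-node lead at 15% per step:
print(f"power remaining: {power_after_steps(2, 0.15):.0%}")  # -> 72%

# "One third the power" means only ~33% remaining, so process steps
# alone don't get anywhere close to explaining the gap.
```

Even with two full node steps at the optimistic end, that's only a ~28% reduction, far from the ~67% reduction the benchmarks suggest.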

Microsoft can't compete profitably in the premium laptop market if Surface laptops have less than half the battery life and lower performance. No PC vendor can. They'll continue to lose share, and that hurts their unit costs, making them even more uncompetitive. If Intel can't close the gap, they have to have an alternative or just cede most of the over-$1,000 market to Apple. They won't do that; they'll go all-in on Qualcomm's new CPUs rather than burn alive with Intel's offerings.

And Intel/AMD also can't let Graviton and other ARM servers keep growing over 100% a year in the hosting market for long, either. I'm a value investor; Intel is cheap, but even if the odds of this risk are only one in ten, that's too much risk.
 
This is a good thing (competition).
Indeed, I saw this pic attached by an Android fan of Qualcomm chips ~ it made me laugh and reminisce about the comic short of Apple fans in a new-iPhone buying frenzy lol.
"Began, the ARM wars have" :) like others have said, competition is good!
“RISC is good!”
 

Attachments: 917B85A7-80D0-45E6-A589-114BD01A3987.jpeg (189.1 KB)
The other side of this coin is “you can’t run old apps that might still be perfectly useful and your business might depend on.”
Apple needs to value compatibility more IMO. Yeah it's not great to keep all those 32-bit libs around and bloat up the OS, but there should still be some way (even if much slower) to run old stuff when needed. Newer Macs can't run 32-bit Mac programs, but ironically my M1 Mac mini can run 32-bit Intel Windows programs thanks to PlayOnMac with Wine 32on64!
 
There are a some very important differences between then and now. [...]
Um. You got something wrong here


IBM's PowerPC 970, which with Apple's collaboration became the G5, had incredible performance. In fact, Apple rightfully claimed, and proved, it was THE most powerful desktop computer at launch for almost a year before Intel matched it with dual CPUs, and Dell publicly whined and complained for that ad to be pulled some 10 minutes later.

So there was no difference then about RISC-based CPUs being more powerful. The difference then was the huge power consumption, because the G5 was based on a server architecture.

Unit costs I think are still very high for Apple, hence the reason to place it in the iPad Pros and Air. Especially the latter, because that target market doesn't need 8GB of RAM (6GB usable), where the iPad Pro market has that need.

Like you, I worry about single-core performance being stagnant. From M1 to M1 Ultra there hasn't been a significant bump, and the software hasn't been refined (on Windows or macOS) to take better advantage of single-core computing. Maybe that's just what it is; I'm not sure.

I agree with most of what you stated.

The ONLY way Qualcomm producing modern ARM chips will ever threaten Intel/AMD is if Microsoft has serious confidence and pushes to get rid of legacy code to support it. Then you'll see Intel dropping that legacy support from their chips as well, but until then we'll wait and see what Dell, HP, and Lenovo do with their lineups, and how many revisions Qualcomm makes for laptops and desktops in the same vein as what Apple has done.
 

Thankfully this time around, Apple is in incredible shape to stay ahead of Qualcomm "ZOMG IF WE CAN GET WINDOWS RUNNING WELL THEN BUH BUY INTEL"...
 
As someone who actually lived through the PowerPC days, this whole song and dance sounds eerily familiar.

I remember Apple crowing about the "inefficient" Pentiums of the time. The PowerPC architecture was supposed to be this massively power-efficient speed demon that "toasted" those poor Intel bunnies and let Apple users run desktops and laptops at speeds x86 users could only dream of. All those benchmarks showing how the "Megahertz Myth" meant that those 233 MHz G3s were so much faster than 450 MHz Pentium IIs, and used much less power.

We know how that whole thing ended up. Nearly constant CPU production shortages, "Wind Tunnel" G4s, loud liquid-cooled G5s, no 3GHz machines, and no G5 laptops. And then an inevitable switch to the supposedly "inferior" x86 processor line.
As a SWE and not a CPU designer, I'm not gonna trust that one thing is going to succeed for some very technical reason (usually RISC vs CISC) that I don't have hands-on experience with.

So I prefer looking at economics: The M1 line is based on iPhone processors. There are way more iPhones in the world than Macs, and way more ARM-based smartphones in general than x86 PCs. PPC was niche in comparison, with the industry way more firmly betting on Intel x86. PPC could've even been a better architecture, idk, but it didn't matter when all that hardware design was going the other direction.
 
I like competition, but what is the purpose of this? Apple has the chip, the hardware, the software, and made a product you can actually buy and use. Maybe some vendor will put this chip in notebook hardware, but what OS and software will be installed, and who should buy these devices?
 
There will be another attempt at Windows on ARM (or Windows on Snapdragon) via the Lenovo X13s this year, before the Nuvia-designed chips come out in 2023:
https://www.lenovo.com/us/en/p/coming-soon/thinkpad-x13s-13-inch-wos/len101t0019

A nice effort.

Notice the terrible build-quality tolerances: the lid closure clearances are terrible.

Mention of business focus with Win11 Pro, BUT nothing about what applications are supported, which is a huge factor with Windows on ARM.

I didn't know there was a Gen 3 Qualcomm ARM chip. Thanks for sharing.
 
In 2023-2024, Apple will be far ahead. Qualcomm's CEO needs to know Apple is already working on the M2 and M3 Apple Silicon chips. 😝
How do you know their launch product won't already exceed the M2? You don't. Doesn't mean it will... but just saying.

Microsoft developing Windows to work with ARM chips seemed a bit odd a few years back considering where ARM was at the time... but that groundwork is done. The issue Qualcomm will have is Windows apps that don't run on ARM, and the lack of a Rosetta-like translation tool for Windows, more than the chips. Microsoft's biggest failure with their ARM adventure was lack of apps (particularly pro apps).
 
Fun to see the comparisons with the old PPC G5 products. But to be honest, Apple was still shipping huge beige boxes at that time. The modern Macintosh, an integrated product where silicon and stylish externals work in combination, was not born until Steve Jobs returned and made the first iMac.

You can see the influence of Steve Jobs and Jony Ive even in the most recent products such as the M1 iMac, where a strong design is coupled with function and with strong performance. The silicon enables the slim design, it’s an iconic product.

We will have to see if Qualcomm and Microsoft can compete on that level, even if they can match the silicon. Stylistically they are aimed at a different market, they will try and make slick black laptops in a minimalistic style for the business user.
 
Why is the Nuvia SOC supposed to compete with the M1 and not the M2? Has Qualcomm said that their Nuvia SOC will compete with the M1?
I think everyone wrongly assumes this. Those former employees have roadmap knowledge of some kind of what Apple is working on. Plus Apple publishes a lot of info on how these chips work.

I think Qualcomm will put out a good chip.
 
Intel rival, not Apple. Other chip designs have existed forever and Apple still maintains its market share. But Intel and AMD should be at least a little worried - Qualcomm is certainly not doing this for an OS other than Windows.

Their statement that they are shooting to compete against Apple is very telling though - they see them as #1. That has to hurt for Intel.
You forgot about Linux. The latest Ubuntu is pretty darn amazing, and it runs as a VM on M1 Macs under Parallels.
 
This is a good thing, especially if it benefits Linux on ARM and software becomes more optimized for that instruction set. My only hope is that this chip also outperforms Intel. I couldn't care less if it's better or worse than M1 or M2 chips. If ARM can become a more viable platform for Linux and Windows, this would only benefit macOS as well, especially in terms of better support for typical UNIX tools, where x86 is currently more widely supported and ARM is a distant second.
 
Qualcomm producing anything in the same performance and energy use ballpark as the M1 is an existential threat to Intel and AMD. It's not speculation that ARM is more efficient, it's common benchmarks that it uses one third the power at the same performance levels. Its also clear that process size is not a significant part of the difference, each process step is 10-15% more efficient, not 70%.

That "one third power" is not primarily because of the instruction set. There is a major silicon design component to that. AMD and Intel don't build entirely "laptop first" CPU core designs. AMD is relatively clearly server first, with "trickle down" to laptops (the same chiplet is used in desktop and server designs, just with different clock settings and I/O chips). Intel is a bit more laptop focused, but there too the designs lean toward "maximum turbo overclocking" so as to win tech press 'porn' benchmarks and have product to sell to the "tricked out super cooler" enthusiast crowd.

"... Usually a wider decode consumes a lot more power, but Intel says that its micro-op cache (now 4K) and front-end are improved enough that the decode engine spends 80% of its time power gated. ... "
https://www.anandtech.com/show/1704...hybrid-performance-brings-hybrid-complexity/5

If the "x86 decoder" is idled (or turned off) 80% of the time, how is it going to consume 30% more energy than an Arm decoder? It isn't even 'on'.

Neither AMD/Intel's x86 nor Apple's M1 series directly executes the instruction sets being hyped here. They all do micro-ops, and there, there isn't some humongous gap. Behind the front-end decoder is where most of the power is being consumed on a "high performance" load. If you're actually adding/subtracting/multiplying/dividing at the highest intense pace, that is going to consume lots of power.



Secondly, it isn't really an "apples to apples" comparison when AMD, Intel, and Apple are all on different process nodes. And also where they are even inside a node (do they choose to stretch the clocks past the node's design baseline to appeal to the overclocker crowd?).

Node differences highly contribute to differences in the sizes of L1/L2/L3 caches, the main internal bus design, etc.


Thirdly, Apple throws some functionality out the window to conserve power. Modular memory? Toast (AMD/Intel systems do DDR4/DDR5 just fine). Modular dGPUs, so far in the M1 line? Toast (chuck x16 PCIe bandwidth provisioning out the window).


Finally, x86 is long overdue for retiring some of the "tangent forays" from the 80's and 90's from the instruction set. Either from "high performance execution speed" (deprecated into a "slow speed mode") or altogether (does an instruction set really need 3-4 SIMD opcode sets? Probably not. MMX could die and it won't really be an issue for modern 64-bit optimized code). A contributing factor to the decoder power consumption problem is wasting power on decoding instructions that statistically almost never come up in normal usage. A contributing reason why Apple went to 64-bit Arm faster than everyone else was not so much to use 64-bit pointers or memory addressing; it was to "throw away" the 90's Arm instruction-option overhead they saw little point in dragging along.

Yes, the variable-length x86 opcode prediction 'tree' has more overhead than a more regularly structured opcode design. But you also don't have to make the prediction 'tree' gratuitously as constipated as possible. Legacy x86 is a "Constipated" Instruction Set more than it is a "Complex" Instruction Set. It is not the variability that is the top problem at this point; it is the 'hoarding' of stuff nobody is particularly using. Yes, there will still be instruction decode overhead for the still quite large set, but at least it would be for instructions with real "value add" for most users.



[Both AMD and Intel have a relatively large number of CPU SKUs. There are some sub-populations running 90's-design embedded OSes/apps for equipment and other things, or 'stuck' on some early-2000's Windows variant for some proprietary app. There could be legacy plain-x86 processors for that code, and something slimmed down for Windows 11 (or >11) and post-2010-11 apps that have modern foundations.]


And Intel/AMD also can't continue to let Graviton and other ARM servers grow over 100% a year in the hosting market either for long. I'm a value investor, Intel is cheap but even if odds of this risk are only one in ten, thats too much risk.

If the market is growing in units enough that Intel and AMD are not really dropping much in their unit sales, then it is nowhere near as dire as you paint it. Arm servers have high triple-digit growth because their relative size is small. Going from 50,000 to 100,000 units year over year is 100% growth, but does that really do major damage to a 50M/year market? Not really.
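To put rough numbers on that "small base, big growth rate" point, here's a hedged sketch. The 50,000-unit starting point and 50M-unit market are the post's illustrative figures, and the 10% "meaningful share" threshold and function name are my assumptions:

```python
# Rough sketch: years of 100% year-over-year growth needed before a
# small ARM server base reaches a given share of a (static) market.
# The 50K start, 50M market, and 10% threshold are illustrative only.
def years_to_share(start_units: float, market: float,
                   target_share: float, growth: float = 1.0) -> int:
    units, years = start_units, 0
    while units / market < target_share:
        units *= 1 + growth  # 100% growth = doubling each year
        years += 1
    return years

print(years_to_share(50_000, 50_000_000, 0.10))  # -> 7 (seven doublings)
```

So even under sustained doubling, roughly seven years pass before a 50K-unit base reaches a double-digit share of a 50M-unit market, which is the point: triple-digit growth off a tiny base takes a long while to bite, though it compounds relentlessly if it does continue.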

Similar issues here too, though. Graviton (and the other Arm Neoverse baseline design cores) are not single-threaded, single-user (and non-virtualized) "drag racing" speed demons. Are 90's-era MMX opcodes really making 64-bit Apache/Nginx workloads run lots faster?

AMD's upcoming Bergamo isn't completely dropping the x86 instruction set to target the cloud. There are probably more internal design-choice changes than there is dropping/replacing of x86 opcodes versus the mainstream server part. (It wouldn't be surprising to see AVX-512 skipped, or implemented without top speed-to-completion as a priority, and it also would not be surprising to see SMT dropped. No "super duper turbo" mode either. Those latter two have nothing to do with instruction sets. Both of those design choices will help get the core implementation size down, but will be less well-rounded across possible general-market workloads. That is OK, because AMD won't really sell this core into those markets.)

It is not so much that AMD/Intel are "letting" the Arm server offerings come into the market as that the market is somewhat balkanizing. Vendors just aren't throwing generic, "does everything", 1U pizza-box modules at the whole data center anymore.

The era of having one server chip that does "everything for everybody" is increasingly detached from market realities (Intel offers Xeon SP, Xeon D, and Atom C processor packages aimed at different server markets). It is going to get harder for Intel to try to make "everything for everybody" all 100% in-house.

Graviton is a bigger threat to AMD/Intel not so much because it is an Arm instruction set, but because Amazon doesn't care about profit margins on Graviton directly. The service needs to make a profit, not Graviton in and of itself. That is a double-edged sword, though: it also means Graviton won't have much of an impact outside the service it is embedded in.
 
Why is the Nuvia SOC supposed to compete with the M1 and not the M2? Has Qualcomm said that their Nuvia SOC will compete with the M1?

Qualcomm never said M1. That is a bit of the "telephone game" going on, where the story keeps getting passed along and at some point changes as 2nd-hand becomes 3rd-hand becomes 4th-hand... reports of what Qualcomm said.

Here is a slide from their presentation in June:

[Image: Nuvia.jpg]



Qualcomm wants to compete with the M-series, most likely with some subset of the whole series. Since there was only an M1 at the time, many folks turned that into "the M1". Is Qualcomm going to do "monkey see, monkey do" for the entire M-series lineup? Probably not, especially if they're going to drag around a modem subsystem on all of their offerings.

Is the M2 going to get a modem on the package? Probably not. So it's not entirely 1-to-1 directly competing with the M2 either. (Qualcomm is going to throw transistor budget at the modem and Apple is not. If both commit to approximately the same die size, there will be trade-offs in performance among the subsystems.)

To some extent, Qualcomm probably thinks Apple is also going where Qualcomm has already been going: a die/package with a modem subsystem on it. Qualcomm is likely aiming to be competitive in the overlap where an M-series instance has a modem on package and Qualcomm has a modem on package.

Qualcomm is not trying to compete with a specific point in time. It will be a competitive offering to the M-series, and the M-series will change over time, so their offering will change over time also. It is pretty doubtful that there is one, and only one, design they are working on. Probably not equal effort on each, but they also really can't afford to do only one at a time.

It is also more than likely they'll still be working with Arm baseline cores in some of their product lineup (just not the specially targeted Windows one).


The majority of this die, when it arrives in 2022-23, is not going to be Nuvia cores. The GPU, the DSP/AI/ML subsystem, and the modem are probably going to consume far more die space than the Nuvia cores will. Most of it is going to be longer-term Qualcomm-planned stuff with the Nuvia cores 'tacked on'.


The Snapdragon 8cx Gen 3 was/is generally competitive with the Intel/AMD mainstream laptop chips from 2020 (Gen 11 and Ryzen 4000-5000). All Qualcomm needs is an option that keeps an x86 Windows user from migrating to macOS M-series, by offering a decent Windows-on-Arm option instead. I highly doubt Qualcomm is primarily out to pull Mac users off of M-series and into Windows. Qualcomm's offering doesn't have to beat every single M-series instance on every tech-porn benchmark. It just has to be good enough and native Windows. (At the time of the presentation, the "M-series" tag carried a "faster than Intel/AMD x86" connotation. That is primarily all they are really going for here. It is more complicated at this point, when you get into the zone of multithreaded bragging rights decoupled from battery life and the presence of a cellular radio.)

Qualcomm really, really, really wants to sell modems/radios. It skews how they look at all products, even Windows PCs.

[P.S. The M1 iPad Pro ships with Qualcomm radios. The M1 iPad Air ships with Qualcomm radios. When the M-series package takes out those radios... Qualcomm wants to have an answer for that with Windows (and maybe large-screen-optimized Android).]
 
And next to no profit from any of them.
Sure, but my argument is not about near-term financial outcomes. We're still in the early days of the digital age. Ford in 1922 had a much more dominant advantage in autos than Apple has now in i-devices. Somehow the market dominance of Ford behind the Model T did not persist over time, and Apple's likely won't either. All good for consumers.
 
"real change" for fewer and fewer every quarter.

Apple has declined from ~30% of global smartphone market share in the mid-10's to ~20% now. And this is with several of the large Chinese manufacturers literally banned from the US.
 
Apple has declined from ~30% of global smartphone market share in the mid-10's to ~20% now. And this is with several of the large Chinese manufacturers literally banned from the US.

and they are growing now.

And for every dollar of profit made off the phone market, Apple takes 70-90 cents of it.

I would totally hate to be apple and their market share now! lol!
 
First time I'm seeing MacRumors members NOT trash the competition in the comments on the first page. Finally some maturity.
We'll trash the competition. We are Apple fans. We won't trash the fact there is competition. Competition is important to keep everyone honest.
 