I would say that unless Qualcomm forms an alliance with an OS company (e.g. Microsoft, Google, etc.), it will be a tough ride. They might want to play nicely with the Linux folks, but again, that is a small market.
They could wrap a nice fool-proof UI around Linux and try their own Chromebooks.

However, all of this seems desperate. Custom silicon is really where Apple’s totally integrated business model shines…
 
ARM is an SoC architecture. There is no reason why Intel can't produce SoCs, and of course they are already working on them.
ARM is an ISA and has nothing to do with the physical manifestation of the instruction set. It definitely has nothing to do with “System on Chip”. I’m pretty sure the first ARM chips were not SoCs.
 
Who would have thought Apple USED Qualcomm in the past. All it took for them to 'crack' was a lawsuit; now they're back to beating their own drum.
 
I do wonder if the Windows gaming market could flip to Arm more easily than expected, given Nvidia at the core of it all. Current GPUs are very power hungry and generate a lot of heat, so a relatively cool, power-sipping Arm CPU could free up a lot of TDP for even more insatiable graphics cards. A number of games have been successfully ported to the Switch, which uses an old and pretty weak Arm chip. I think if the hardware starts appearing, game studios might embrace it quicker than you'd expect. Not to mention if the PS6 and Xbox nextnewnamingscheme also switch over...
I think software-wise, games will be the most likely of those three to be ported. Being mostly dependent on the GPU and DirectX (which runs on ARM), they probably don’t have as many x86 dependencies as the others. Still, you need to consider that:
1. Current and past generation consoles run on x86 SoCs. ARM would be another thing to port for.
2. A big part of PC gaming is desktop, where thermals are of no concern and expandability is key, which can’t be done with the kind of ARM SoC Qualcomm is making.
3. Old games will likely never get ported, which would be a huge deal, and limit ARM gaming as a platform.

Newer games probably could be ported to ARM, but why would you do it as a developer? Your user base doesn’t care about thin-and-light laptops that benefit from ARM, and the other platforms you target (console) are x86. Especially when Microsoft itself doesn’t seem very serious about the whole effort.
 
As long as enterprise, government, and many others use and depend on x86 software, Intel does not need to be afraid of competitors.

Sad.

We have these HP Elite windows tablet laptops for work, and they are so thin that heat dissipation becomes an issue when you are doing anything heavier than word processing.

I have lost count of how many of my colleagues have run into screen burnout issues due to heavy Zoom use on these devices. The only reason it hasn’t been more is that some of my colleagues switched to using their own computers while at home.

Seems like an issue with Windows tablets is that the screen sits right over the processor. A problem that I don’t experience with iPads and my M1 MBA, because Apple’s chips are just so efficient.

I really don’t see how Intel can pull itself out of the rut that it is currently in, and it just continues to mean lousier products that we have to put up with at work.
 
Qualcomm’s already working with Microsoft. They can say SOME things that IMPLY a deeper, more substantive engagement, but they can’t come out and say “Windows 11 will be ready for ARM when it ships!”
I mean, we have seen the results of that already, the Surface Pro X used custom SQ1 and SQ2 chips that were designed in partnership with Microsoft. They just weren’t very good, at least not compared to the M1 devices.

As long as Windows is licensed to OEMs and has to support not just one or two chips but the diverse array of different hardware used in Windows PCs, it will be impossible to make silicon for Windows that is as efficient as the M1 is for Apple’s OSes.

Designing the hardware and software in conjunction with each other is a massive advantage that simply can’t be overstated.

Plus, they are starting from behind. They may catch up eventually, but AT BEST their 1st-gen “M1-class” chip might best the M1 slightly… in 1-2 years, by which point Apple will be 2-3 generations beyond it.

Plus, let’s not forget, the M1 is still just a smol chip; Apple has lots of headroom to simply scale up, in addition to per-core performance, which of course is likely to improve with each generation too.
 
So Qualcomm says Apple can’t compete with them in modem chips, but also says that Qualcomm can compete with Apple in ARM chip design?
You noticed that too.

They can’t compete with us due to our legacy, but we can compete with them because we stole some employees?

Doesn't Apple poach employees for the same reason?

They may be right. But I think it’s put up or shut up.

As for Apple, they make great portable device chips. They haven’t proven that they can put together anything with workstation class specs: cores, graphics, memory addressing, bus architecture. Until they prove themselves there, Apple Silicon is just a mobile chip running low end laptops and desktops.
 
Interesting.
Makes me wonder how my Raspberry Pi and Banana Pi seem to work, since there aren’t any OSes that run on non-x86 hardware.

Is that what Qualcomm meant when they said they will take on Apple Silicon? BananaPi ?
I guess better sell those AAPL stocks and get some Qualcomm.

Not to mention Android, iOS, MacOS... y'know just >50% of computing devices in use globally today...

Android is already running Qualcomm chips, and macOS+iOS are out of the picture because that’s Qualcomm’s competition. So we are left with Windows and the 1.9% or so of Linux users worldwide.
 
Exactly this. I promised never to buy another Mac (my previous Mac was a 2012 MBA), but the M1 Mini was too tempting. I hate macOS, but I put up with it because the value of the M1 Mini was too great. If Microsoft provides support for M-series Macs, I will keep buying them.

If Qualcomm can come close to M-series performance and Microsoft supports these chips, I will buy a PC with a Qualcomm chip. If the performance sucks, I will just buy M-series Macs and hope that Microsoft will sell Windows licenses for them. IMO Intel messed up. I loved my Dell Axim X51; they should never have ditched ARM, and they could have owned the market if they had not sold their XScale division.
I need better graphics and 32GB of RAM, and then I will bite. Well, I need 16GB right now, but I keep my computers 1 or 2 years past when new macOS versions stop supporting them, so being stuck with 16GB for 7 years is a no-go. My 2012 MBP is at 16GB now, but it didn’t start that way.
 
ARM is an SoC architecture. There is no reason why Intel can't produce SoCs, and of course they are already working on them.
What is a “SoC” architecture?

SoC is a design methodology, not an architecture. It simply means that physical designs are done with re-use in mind, so that you can combine separately-designed blocks.
 
@huge_apple_fangirl I think it makes a pretty big difference. The folks at Apple have similar skills to the folks at AMD, and use similar design methodologies. So if you want to see what Apple could do with an x86 license, you can get a pretty good idea by looking at what AMD has. Performance per watt, the M1 destroys it.
 
@huge_apple_fangirl I think it makes a pretty big difference. The folks at Apple have similar skills to the folks at AMD, and use similar design methodologies. So if you want to see what Apple could do with an x86 license, you can get a pretty good idea by looking at what AMD has. Performance per watt, the M1 destroys it.
That’s true, but M1 also uses a more advanced process node (TSMC 5nm vs 7nm), big.LITTLE, and has an optimized OS. It’s not a completely fair fight.
 
That’s true, but M1 also uses a more advanced process node (TSMC 5nm vs 7nm), big.LITTLE, and has an optimized OS. It’s not a completely fair fight.

Yeah, but you can compare node-to-node by looking at A-series chips; remember that the M1’s per-core performance per watt more or less matches the A-series.

Another way to look at it is that even discounting the M1’s performance per watt by, say, 30% for the node difference doesn’t come close to eating up the gap.
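
To put toy numbers on it (the 3x figure below is just a hypothetical stand-in for the M1’s lead over a contemporary x86 laptop part, not a measurement):

```python
# Back-of-envelope only: the lead figure is hypothetical, not measured data.
m1_perf_per_watt_lead = 3.0      # hypothetical M1 lead over an x86 laptop chip
node_discount = 0.30             # generously credit all of TSMC 5nm vs 7nm
adjusted_lead = m1_perf_per_watt_lead * (1 - node_discount)
print(f"Lead after node discount: {adjusted_lead:.1f}x")  # still ~2.1x
```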
 
3. Old games will likely never get ported, which would be a huge deal, and limit ARM gaming as a platform.
If they aim for a solution like Apple’s, it might make old games MORE likely. Apple’s got some old games running, with translated x86 code, that are more performant than those same games running natively on Intel chips.
 
Because the more ARM shows how much better it is than Intel in most cases, the more people will switch to Macs and, later, to Windows 11 on ARM.
How is ARM better than x86? It’s been proven many times before that x86 has a performance advantage over ARM. What ARM does in 20 steps, x86 can do in 3.
 
How is ARM better than x86? It’s been proven many times before that x86 has a performance advantage over ARM. What ARM does in 20 steps, x86 can do in 3.

I designed x86 chips, including the first 64-bit chip (where I designed the 64-bit extensions to the integer math instructions, as well as various parts of the chip itself). I’ve also designed RISC chips (the first was a unique design roughly similar to MIPS, then a PowerPC chip which was the fastest PowerPC in the world and second in speed only to the DEC Alpha, then a SPARC chip that was designed to be the fastest SPARC in the world).

So I’ll explain why your statement is wrong.

First, there is zero proof of an x86 performance advantage - in fact, the *first time* that anyone used custom-CPU design techniques (the same techniques that are used at, for example, AMD and Intel) to try and design an Arm chip to compete with x86, it succeeded magnificently (that chip is M1). Prior to M1, nobody actually tried to make an Arm chip to compete with the heart of the x86 market.

Second, there are inherent technical disadvantages to x86. The instruction decoder takes multiple pipeline cycles beyond what Arm takes, and that will always be the case. As a result, every time a branch prediction is wrong, or a context switch occurs, there is a much bigger penalty when the pipeline gets flushed. It also means that it is much harder to issue parallel instructions per core, because variable-length instructions mean you can’t see as much parallelizable code in a given instruction window size. We see this play out in Apple’s ability to issue, what, 6 instructions per cycle per core, vs. the max that x86 could ever possibly do of 3 or 4. And we see in the real world that Arm allows many more in-flight instructions than x86.
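
A toy sketch of the decode problem (the byte values and the length rule here are invented for illustration; they are not real AArch64 or x86 encodings):

```python
def decode_fixed(code, width=4):
    # Every AArch64 instruction is 4 bytes, so all instruction boundaries are
    # known up front and a wide decoder can start on every slot at once.
    return [code[i:i + width] for i in range(0, len(code), width)]

def decode_variable(code, length_of):
    # With variable-length encodings you only learn where instruction N+1
    # starts after determining the length of instruction N, which serializes
    # (or greatly complicates) a wide decode stage.
    insts, i = [], 0
    while i < len(code):
        n = length_of(code[i])
        insts.append(code[i:i + n])
        i += n
    return insts

toy_length = lambda first_byte: (first_byte % 15) + 1  # hypothetical 1-15 byte rule

code = bytes(range(32))
print(decode_fixed(code))
print(decode_variable(code, toy_length))
```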

Third, it does no good for x86 to do things in 3 steps vs Arm’s 20 when each of those 3 steps really is 7 internal steps. And that is what happens with x86. An instruction is not an instruction. An x86 instruction will, most of the time, be broken down in the instruction decoder into multiple sequential micro-ops, using a lookup table called a microcode ROM. So those 3 instructions end up being 20 microcode instructions. Each is then processed the same way that an Arm chip would process its own native instructions. The only difference is that Arm chips don’t need to spend the time, electricity and effort to do that conversion, and because the incoming instructions are pre-simplified, it makes it much easier for Arm chips to see far into the future and to parallelize whatever can be parallelized as the instructions come in.
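
Conceptually, the expansion step looks something like this (the mnemonics and micro-op sequences are made up; real x86 microcode is proprietary and far more involved):

```python
# Toy 'microcode ROM': one complex instruction expands into several simple
# micro-ops before execution.
MICROCODE_ROM = {
    "add [mem], reg": ["load tmp, [mem]", "add tmp, reg", "store [mem], tmp"],
    "push reg":       ["sub sp, 8", "store [sp], reg"],
}

def expand(instruction):
    """Return the simple micro-ops the core actually executes."""
    return MICROCODE_ROM.get(instruction, [instruction])  # simple ops pass through

for inst in ("add [mem], reg", "push reg", "add reg, reg"):
    print(f"{inst!r} -> {expand(inst)}")
```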

The reason “complex” instructions like x86 were once an advantage had nothing whatsoever to do with performance - it was done that way because RAM used to be very expensive, so encoding common strings of instructions into complex instructions saved on instruction memory. At one point it might have provided a slight speed boost, too, because there were no instruction caches back then, so fetching less stuff from memory might have given a slight speed improvement in certain situations (rare ones. It’s a latency issue only on unpredicted branches).

And, of course, the extra hardware to deal with x86 instruction decoding is not free - on the cores I designed, the instruction decoder was 20% of the core (not including caches). That’s big. On the RISC hardware I designed, the decoders are much tinier than that. In addition to space, that means power is used, and circuits that want to be next to each other have to be farther apart to make room for it.

Another inherent advantage of Arm over x86 is the number of registers. x86 has a very small number of registers - I can’t, as I sit here, think of a modern architecture with fewer. For the same reasons as you want to avoid instruction memory accesses, you also want to avoid data memory accesses (only more so!). And the fewer registers you have, the more often you will have to perform memory accesses. That’s simply unavoidable. Each memory access takes hundreds of times longer (or thousands or more if you can’t find what you need in the cache) than reading or writing to a register. And because x86 has so few registers, you will have no choice but to do that a lot more than on Arm.
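
Here is a toy model of that effect. The 16 vs. 31 general-purpose register counts are the real x86-64 vs. AArch64 numbers, but the instruction stream and the LRU spill policy are invented for illustration:

```python
import random

def extra_memory_ops(value_stream, num_registers):
    """Count loads/stores forced by spilling with a simple LRU policy."""
    in_regs, mem_ops = [], 0
    for v in value_stream:
        if v in in_regs:
            in_regs.remove(v)          # value already in a register; refresh recency
        elif len(in_regs) >= num_registers:
            in_regs.pop(0)             # spill least-recently-used value:
            mem_ops += 2               # one store now, one reload later
        in_regs.append(v)
    return mem_ops

random.seed(0)
stream = [random.randrange(40) for _ in range(10_000)]  # 40 hot values, made up

print("16 GPRs (x86-64): ", extra_memory_ops(stream, 16), "extra memory ops")
print("31 GPRs (AArch64):", extra_memory_ops(stream, 31), "extra memory ops")
```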

Why does x86 have few registers? Again, for historical reasons that no longer apply. Whenever you have a context switch (e.g. you switch from one process to another), you need to flush the registers to memory. The more you have, the longer that takes. But now we have memory architectures and buses that allow writing a lot of data in parallel, so you can have more registers and not pay that penalty at all. So why didn’t x86-64 just add a ton more registers? (It added some.) Because the weird pseudo-accumulator-style x86 way of encoding instructions couldn’t benefit from them easily while still allowing compilers to be easily ported to it.

So the tl;dr version:

1) the only “proof” out there is that Arm has a technological advantage over x86
2) RISC is better than CISC, given modern technological improvements and constraints
3) there are specific engineering reasons why this is so
 
Well, of course with the amazing reception of the M1, people will look for an equivalent Windows solution. Windows has a far larger demographic, and a greater incentive for someone to take a cash cow from Intel. The problem is Windows and a user base that is so entrenched in legacy and unwilling to risk the tried and true.
The technology is there; it’s the legacy software, developers, and institutional customers that are not. Apple, with its relatively tiny market share, could be so bold as to rip off the band-aid. It’s not so much about Qualcomm, but Windows has to implement its own Rosetta 2-type strategy to pull it off.
 
2) RISC is better than CISC, given modern technological improvements and constraints
3) there are specific engineering reasons why this is so
If RISC is so much better than CISC, why did Apple have to switch from a RISC architecture (PowerPC) to a CISC one (x86), specifically because of power efficiency? Most of the advantages of ARM would apply to PPC as well.
 
If RISC is so much better than CISC, why did Apple have to switch from a RISC architecture (PowerPC) to a CISC one (x86), specifically because of power efficiency? Most of the advantages of ARM would apply to PPC as well.

Because nobody (not Motorola, not IBM, and not Exponential) was trying to make a power efficient PPC chip. Our goal, back then, was to make the highest possible performing chips. The PowerPC I worked on was 500MHz, which at the time was second only to DEC’s Alpha. Nobody in the PowerPC world was thinking about making low-power chips.

Hell, my chip had bipolar circuits! They were literally designed to burn constant power, regardless of usage, so they could be faster.

Instruction set architecture is not the only thing that matters; that’s why AMD beats Intel despite using the same ISA. But all things being equal, you’re much better off with RISC than CISC, and the M1 proves it (and the M2 will prove it more).

A “PowerPC” version of M1 would have very similar performance per watt as M1.

I’m glad they went with Arm, though, because PowerPC has some other weird quirks that made designing them a headache - due to a historical fluke that dates back to IBM’s mainframe processors, the bits are numbered backwards! It was really confusing designing the floating point unit when bit 0 is the high order bit, and so it represents a different power of 2 depending on which floating point representation is being used.
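
For anyone who hasn’t run into it, a tiny sketch of what that convention looks like (just an illustration of the numbering, nothing PowerPC-specific beyond that):

```python
# IBM/PowerPC convention: bit 0 is the most significant bit, so the same bit
# index lands on a different power of two depending on the width of the field.
def ibm_bit_to_lsb0(ibm_bit, width):
    """Convert 'bit 0 = MSB' numbering to the usual 'bit 0 = LSB' numbering."""
    return width - 1 - ibm_bit

print(ibm_bit_to_lsb0(0, 64))  # 63: 'bit 0' of a 64-bit register is 2**63
print(ibm_bit_to_lsb0(0, 32))  # 31: but 'bit 0' of a 32-bit field is 2**31
```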
 