If they aim for a solution like Apple's, it might make old games MORE likely to run well. Apple has some old games running through translated x86 code that perform better than the same games running natively on Intel chips.
1. Apple’s graphics generally sucked pre-M1, so the bar is much lower. They only have to beat Intel (CPU & GPU) and pre-RDNA2 AMD to have superior gaming. Qualcomm’s SoC is not beating AMD RDNA2 and NVIDIA (as of now, neither is Apple. We shall see what the future holds.)
2. You know which old games don’t run on M1 in Rosetta 2? All 32-bit ones. Because Apple killed them off in Catalina, knowing they would be difficult to translate, and opting not to bother. Killing off 32-bit only for M1 would have been a stain on “it can run all your software”. But on Windows, plenty of apps and games (such as Visual Studio) are 32-bit and are not being killed off anytime soon.
3. In general, Apple has a much easier time because they have much less to be compatible with. Imagine if Rosetta 2 had to translate not just Cocoa apps for 64-bit x86, but also 32-bit x86, x86 Carbon, PPC Cocoa, PPC Carbon, and Classic - yikes! Because that's a lot closer to what Microsoft has to deal with. By making Windows 11 64-bit only, with no 32-bit version, Microsoft is killing off 16-bit DOS apps (which could still be run on 32-bit Windows 10). That's what they're only now cutting off. Meanwhile, Apple will have been 64-bit only for 3 years by then.
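(For anyone curious which of their old games fall into that 32-bit bucket from point 2: `file` or `lipo -archs` will tell you, and the check itself is just a peek at the Mach-O magic number. A minimal sketch, with the standard magic values hard-coded rather than pulled from <mach-o/loader.h> so it builds anywhere:)

```c
/* Sketch: guess whether a binary is 32-bit-only Mach-O (so Rosetta 2
 * won't touch it) by reading its magic number. */
#include <stdio.h>
#include <stdint.h>

#define MH_MAGIC    0xfeedfaceu  /* thin 32-bit Mach-O   */
#define MH_CIGAM    0xcefaedfeu
#define MH_MAGIC_64 0xfeedfacfu  /* thin 64-bit Mach-O   */
#define MH_CIGAM_64 0xcffaedfeu
#define FAT_MAGIC   0xcafebabeu  /* fat/universal binary */
#define FAT_CIGAM   0xbebafecau

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    uint32_t magic = 0;
    size_t got = fread(&magic, sizeof magic, 1, f);
    fclose(f);
    if (got != 1) { fprintf(stderr, "too short to be Mach-O\n"); return 1; }

    switch (magic) {
    case MH_MAGIC: case MH_CIGAM:
        puts("thin 32-bit Mach-O: Rosetta 2 will not translate this"); break;
    case MH_MAGIC_64: case MH_CIGAM_64:
        puts("thin 64-bit Mach-O: Rosetta 2 candidate (if x86_64)"); break;
    case FAT_MAGIC: case FAT_CIGAM:
        puts("fat binary: check each slice, e.g. with `lipo -archs`"); break;
    default:
        puts("not a Mach-O binary");
    }
    return 0;
}
```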
 
Wishful thinking, QUALCOMM! All talk. Talk is cheap. Instead of becoming a competitor… focus on collaborating.

Does the CEO not know who he’s messing with? Don't end up like Intel, please. You have a good relationship with Apple for now.

Apple is the top dog. 🍎 🌎

Qualcomm - How do you plan on competing with this? Can you imagine what the M1X or M2 is going to be like?

It's wishful thinking to assume Qualcomm doesn't have the ability to compete with Apple in raw CPU performance. It's not just the CPU anymore; it has to be the DSP, the GPU, and a lot more components. Qualcomm are pioneers in the SoC space. Die size matters as well: Apple has the luxury of a larger die, whereas Qualcomm must satisfy the numerous form factors of the Android ecosystem with smaller dies for profitability reasons. From what I understand, Qualcomm still wins on performance per watt per unit of die area. Engineering-wise, I must say we cannot choose between the two.
 
How deep can Apple's competence be since they have only been in the chip business for, how long now?

And competing against the M1 is going to mean competing against the stacks and stacks of laudatory utterances from so many different sources. The 'coverage' was getting embarrassing. 'Apple is eating Intel's lunch' was a common one (I may have said something to that effect, and regret it somewhat). Some were also stating that Apple 'owned the future of silicon', whatever that means, (as if it would be possible). I started thinking that the M1 was going to be a fail because of the sheer volume of chest-thumping releases coming out praising it. I've heard all that before. Call me a cynic, but the more something is praised, the less I want to have anything to do with it. (I'd rather be a spectator and watch it fall, than own any of it.) And now that the M1 has been out for a while, people are already looking for 'the next thing', sure that Apple will once again RULE THE WORLD OF SILICON (echo effect), meaning the hype just fell rather short. But fear not, for the cheerleaders are polishing their shoes, and fluffing those pompoms, getting ready for the next release of THE NEXT BIGGEST THING IN THE HISTORY OF HISTORY! (I wish I could add an echo effect, darn)


The above do not represent the opinion of my employer, or their agents, representatives, or any person, living or dead. No animals were harmed in the production of this post. Your mileage may vary. Subject to prior sale. Offer limited to quantity on hand. Not necessarily meant to be a negative, or positive, comment on preceding posts. Johnny 5 is alive. Flynn lives. PSYCHO BUNNY!!!
Well, there are plenty of people opining on this subject who know nothing about it.
Then there are people like me who worked at Apple for 10 years, whose specialty was assembly-level optimization, who have spent their entire lives studying micro-architectures, who have spent the last few months reverse-engineering the M1.

It's your choice whom you want to believe...
 
It's wishful thinking to assume Qualcomm doesn't have the ability to compete with Apple in raw CPU performance. It's not just the CPU anymore; it has to be the DSP, the GPU, and a lot more components. Qualcomm are pioneers in the SoC space. Die size matters as well: Apple has the luxury of a larger die, whereas Qualcomm must satisfy the numerous form factors of the Android ecosystem with smaller dies for profitability reasons. From what I understand, Qualcomm still wins on performance per watt per unit of die area. Engineering-wise, I must say we cannot choose between the two.
Qualcomm has had years to try and has not been successful. Why will they suddenly be successful now? Because they hired a couple of folks from Apple?

And why is die size important? The package size matters, not the die size. And a bigger die, given the same number of transistors, is better because cooling is a function of die surface area.
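(To put a number on the cooling point - a rough sketch with made-up power and die figures, nothing measured:)

```c
#include <stdio.h>

int main(void) {
    const double watts = 15.0;                /* hypothetical SoC power budget */
    const double die_mm2[] = {100.0, 140.0};  /* hypothetical die areas (mm^2) */

    /* same heat, more area -> lower power density for the cooler to handle */
    for (int i = 0; i < 2; i++)
        printf("%.0f mm^2 die: %.3f W/mm^2\n", die_mm2[i], watts / die_mm2[i]);
    return 0;
}
```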
 
Qualcomm has had years to try and has not been successful. Why will they suddenly be successful now? Because they hired a couple of folks from Apple?
Define "success". They provide SoCs+modems for the majority of Android phones in the USA, and the majority of flagships overall. Now, they got to this point not by having a superior product but by unfair licensing deals, bundling their leading modems with mediocre SoCs and pushing competitors out of the market - but it is still success. Success for Qualcomm doesn't necessarily entail making a good product.
And why is die size important? The package size matters, not the die size. And a bigger die, given the same number of transistors, is better because cooling is a function of die surface area.
Qualcomm has to sell chips and make a profit on each one. Apple doesn't have to - they sell phones/laptops. As a result, Qualcomm (and the rest of the ARM vendors) are mostly concerned with performance per area, not performance per watt. Smaller chips = lower cost per chip. For Apple, making a better chip translates to a better product and better sales - just look at M1 Mac growth! For Qualcomm, with their modem monopoly, not so much. That's why Qualcomm's main talking point for Windows on ARM has not been performance, but 4G/5G connectivity - something they are actually good at. They hoped to recreate their Android monopoly in a new market (it failed). After NUVIA products come to market, we shall see how they do when they actually have to compete on performance with Intel and AMD.
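(The cost-per-die arithmetic behind that, as a rough sketch - the wafer price, die sizes and defect density below are placeholders, not real TSMC figures, and the dies-per-wafer estimate ignores edge loss:)

```c
#include <stdio.h>
#include <math.h>   /* link with -lm */

int main(void) {
    const double wafer_cost = 17000.0;                      /* hypothetical $ per 300 mm wafer */
    const double wafer_area = 3.14159265 * 150.0 * 150.0;   /* mm^2                            */
    const double defect_d0  = 0.001;                        /* hypothetical defects per mm^2   */
    const double die_mm2[]  = {120.0, 85.0};                /* "big" vs "small" die            */

    for (int i = 0; i < 2; i++) {
        double dies  = wafer_area / die_mm2[i];              /* crude dies-per-wafer estimate  */
        double yield = exp(-defect_d0 * die_mm2[i]);         /* simple Poisson yield model     */
        double cost  = wafer_cost / (dies * yield);
        printf("%.0f mm^2 die: ~%.0f dies/wafer, yield %.0f%%, ~$%.0f per good die\n",
               die_mm2[i], dies, yield * 100.0, cost);
    }
    return 0;
}
```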
 
Well, there are plenty of people opining on this subject who know nothing about it.
Then there are people like me who worked at Apple for 10 years, whose specialty was assembly-level optimization, who have spent their entire lives studying micro-architectures, who have spent the last few months reverse-engineering the M1.

It's your choice whom you want to believe...

And obviously you took something that I said personally. Apologies, but I've been in the industry for over 50 years. I have 'been there before'. If I got excited for everything that came out that was supposed to 'ROCK THE WORLD!!!', I think I'd be dead from the excitement. Corporations only release what they feel will give them an edge and create buzz, and get them the media focus. Oh, and some companies come up with something that 'reaches' and they have nothing to follow up with.

I have seen, 'EARTH SHAKING TECHNOLOGY!!!' that was 'GOING TO CHANGE THE IT INDUSTRY FOR DECADES!!!' that was brain dead when it was conceived, and face planted the next week, but damn, they sure milked the hell out of that *flash* in the pan.

Until the government starts funding 'basic research', and helps drive capitali$m to real 'bigger and better things', 'reality(TM)' will be an endless wave of breathless press releases of corporations hoping no one notices that they are just feeding people what they want to hear. Apple is actually not that much different. In a back room, at Apple, someone made a decision to birth the M1, and furiously tried to back fill the mess to make it seem like they did great.

I remember an industry wonk that said that many in the industry seem to laud corporations for making 'poopies in the toilet' like a three year-old, and, well, that's so easy to achieve. I mean, my damn puppy knew within a day not to crap in the house. Is it something that requires medals? No...
 

I don’t have to define success. Qualcomm’s CEO did. That’s what we are talking about here. And success, as he defines it, has never been achieved before, so why would it now?

As for “they have to make a profit on each one,” the small difference in die size between M1 and 888, for example, doesn’t make much difference in profit, especially given TSMC’s yield curves (you pay per wafer start, not per die). And even if it did, that’s just more of a reason that Qualcomm can’t compete. Explaining why they can’t compete does nothing to prove that they will be successful in competing.
 
That’s true, but M1 also uses a more advanced process node (TSMC 5nm vs 7nm), big.LITTLE, and has an optimized OS. It’s not a completely fair fight.
If RISC is so much better than CISC, why did Apple have to switch from a RISC architecture (PowerPC) to a CISC one (x86), specifically because of power efficiency? Most of the advantages of ARM would apply to PPC as well.
cmaier has answered some points, let me answer others.

First the issue of RISC vs CISC is mostly unimportant. It's a stupid fight from thirty five years ago that's mostly irrelevant to anything today, and is kept alive by the fact that it allows a certain class of people to feel like they understand an issue they actually do not understand at all.

Let's take Moore's law seriously. 2x the transistors every two years. So thirty years, fifteen doublings, 32 THOUSAND times the transistors! Do you really think arguments about how best to design a CPU are still relevant when you have THOUSANDS of times more transistors available to throw at the problem?
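(The arithmetic, for anyone who wants to check it:)

```c
#include <stdio.h>

int main(void) {
    /* fifteen doublings over ~thirty years */
    printf("2^15 = %d times the transistor budget\n", 1 << 15);   /* 32768 */
    return 0;
}
```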

ISA is just not *that* important. Ideally you want an ISA that's a better match to today's languages, SW design methodologies, and CPU design techniques, but you can work around many things. IMHO ARMv8 is substantially better than RISC-V which is substantially better than x86, but these are not dispositive.
More important is a cluster of more nebulous things:
- do you try to obtain speed via IPC (simultaneously executing many instructions in the same cycle) or via GHz?
Intel has always pushed GHz, was badly burned by this with the P4, appeared to learn some sense in the immediate post-P4 years (the years, precisely, that Apple switched to Intel -- no coincidence there) and then, as a new generation of management and engineers took over, forgot everything they had learned from the P4 debacle.
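(The trade-off in one line of arithmetic: delivered throughput is roughly IPC times clock. The two cores below are hypothetical, just to show that a wide, lower-clocked design can match or beat a narrow, high-clocked one:)

```c
#include <stdio.h>

int main(void) {
    /* two hypothetical cores: wide-and-slow vs narrow-and-fast */
    const char *name[] = {"wide core", "high-GHz core"};
    double ipc[] = {5.0, 2.5};   /* sustained instructions per cycle */
    double ghz[] = {3.2, 5.0};   /* clock frequency */

    for (int i = 0; i < 2; i++)
        printf("%-13s: %.1f IPC x %.1f GHz = %.1f G-instructions/s\n",
               name[i], ipc[i], ghz[i], ipc[i] * ghz[i]);
    return 0;
}
```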

To use a rough analogy, Apple designs CPUs using Swift, Intel designs them using x86 assembly. In other words Apple designs the CPU (and SoC) at a high level, and uses tools to convert that high level design into, ultimately, a transistor layout; Intel designs at a much lower level, closer to the transistors.
In theory you can write faster code in assembly -- but in practice what happens is
+ you become incapable of making LARGE changes because that's just too much work. You can make small patches of code run fast, but you can't change the algorithmic structure of your code to a better algorithm
+ writing and validating your code takes far more manpower and far longer.
That's the trap Intel finds itself in. Because they have pushed GHz so hard, they find it extremely difficult to engage in even small redesigns of their cores, let alone total rewrite from scratch. Meanwhile Apple can, in a sense, tell the compiler "perform a hash table lookup here", then next year turn that into "let's change that hash table to an array" and make a substantial design change, while the compiler does all the low level work.
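(A toy illustration of that analogy - callers only see a high-level lookup() interface, so the structure underneath can be swapped from a scanned array to a hash table next year without touching them. Nothing here is anything Apple actually does; it's just the shape of the argument:)

```c
#include <stdio.h>
#include <string.h>

struct entry { const char *key; int value; };

/* "This year's" implementation: a linear scan over a tiny array.
 * Next year it could become a hash table with the same signature,
 * and no caller would have to change. */
static const struct entry table[] = { {"add", 1}, {"sub", 2}, {"mul", 3} };

static int lookup(const char *key) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].key, key) == 0)
            return table[i].value;
    return -1;   /* not found */
}

int main(void) {
    printf("mul -> %d\n", lookup("mul"));
    return 0;
}
```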

By luck or by design Apple realized at just the right point that going forward transistors were not getting any faster, but they were still getting denser. So Apple built their entire design methodology on "use as many transistors as you like, but don't waste effort in super specialized circuit techniques to run your clock cycle faster".

- what do you imagine you are selling? Apple were selling iPhones -- so they added whatever they wanted to the iPhone to make it better. This included not just CPUs, but ISPs, security, a variety of accelerators, always-on coprocessors, etc.
Intel always saw themselves as selling a chip that runs x86 code, and weren't much interested in adding anything else. Eventually they did, but in usual Intel fashion, too little too late. For years their GPUs were a joke. They claimed about ten years ago to add an NPU, but it was a pathetic little thing that could slightly accelerate a specialized voice model; it was in no way what we think of as an NPU today. The security stuff has been an ongoing disaster because they insist on doing it as part of x86, not as a separate piece of the chip. In the same way, no interest in accelerators, low-power co-processors, functional DMA, all the other stuff.

- Apple don't waste effort. Apple (to simplify) designs one core each year. The pattern APPEARS to be (this is a guess, but it looks like it) that they design something new from scratch every four years. This is, in a sense, a massive overdesign in that some parts seem way bigger than they need to be. Then, over the next three versions (as more transistors become available) they fill in the pieces that at first seemed too small. Then another design from scratch. The small cores appear to be reparameterized versions of the large cores (same general design, just use one of everything rather than two of everything!). Compare to Intel who (even apart from the idiocy of the zillions of official SKUs) design Xeon, desktop, mobile and Atom lines. And they claim they will share between these but seem to do a terrible job of that actual sharing.
Apple understand that you don't need to DESIGN a cheap version of a chip, you just use the version from last year or the year before. That works better for customers, for developers, for Apple.

So those are Intel's issues:
- obsession with GHz at the expense of IPC
- obsession with providing a zillion market segments with a zillion different SKUs
- obsession with x86 at the expense of looking beyond the CPU

In technical terms, you need to look at what makes cores fast nowadays. Intel has these problems
- the design is almighty complex, with all manner of strange interactions, and an Intel promise that none of this will ever change. This makes design and validation take forever (made worse by the low level design techniques) and means they live in terror of large changes
- the variable instruction length makes fetching and decoding ever more difficult as you go wider (see the sketch after this list). I suspect it also makes many of the details of implementing branch prediction more difficult.
- some especially stupid design choices around things like flags and partial registers make the flow of a modern OoO machine just that much harder
- the memory model (ie the rules around exactly how loads and stores can be re-ordered relative to each other, and when these memory value changes have to be communicated to other CPUs) limits many of the neatest optimizations that can be performed in the load/store/cache system which is probably the most difficult part of a modern machine
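(Here's the decode sketch mentioned above. With fixed 4-byte instructions the position of instruction N is just N*4, so a wide front end can attack eight of them at once; with variable lengths you have to find each boundary before you know where the next instruction starts, which is inherently serial - real decoders throw length predictors and brute force at the problem. Toy model only:)

```c
#include <stdio.h>
#include <stdint.h>

/* Pretend instruction stream where each "instruction" begins with its own
 * length in bytes (a caricature of variable-length encoding). */
static const uint8_t var_stream[] = {3,0,0, 1, 5,0,0,0,0, 2,0, 4,0,0,0};

int main(void) {
    /* fixed width: every boundary is known up front, so all of these
     * could be decoded in parallel */
    for (int n = 0; n < 4; n++)
        printf("fixed-width insn %d starts at byte %d\n", n, n * 4);

    /* variable width: each boundary depends on decoding the previous
     * instruction, so finding them is serial */
    size_t off = 0;
    for (int n = 0; off < sizeof var_stream; n++) {
        printf("variable-width insn %d starts at byte %zu\n", n, off);
        off += var_stream[off];
    }
    return 0;
}
```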



But ULTIMATELY almost all of this boils down to personality!
Some individuals are comfortable with change, and are willing to make large changes for the sake of an eventual improvement.
Some individuals think continuity is most important, and are willing to engage in a lot of work to maintain continuity.
Apple has ALWAYS been a change company, has attracted change engineers, and has acquired change developers and customers. (If you are unhappy with constant small changes, you just won't be part of the Apple world for more than a few years!)
Intel (and MS) have always been continuity companies.

Obviously I'm on the side of Apple in this. Constant small change is irritating, yes, but not as irritating as the continuity people claim. My view is the continuity people have a dangerous view of reality. By assuming they can create a world of no visible change, they actually create an extremely fragile world, one where code and hardware are not updated annually to match new realities. And so we get situations where twenty-year-old software fails, and no-one knows how it works. Or companies are attacked because of security issues from twelve years ago that were never fixed.

Regardless of that, these "RISC vs CISC" arguments are REALLY mainly not about technology but about "lifestyle". Should companies and individuals accept constant small pain because it allows for constant improvement? Or should we accept massive inefficiency so that we can all pretend the tech world is static, and code once written never needs to be updated? Don't be fooled -- this is a bikeshedding argument, not a tech argument. If you want to learn technology, you have to ask tech questions; ANY question that can be converted into a "personality" question will devolve into uninteresting nonsense.
 
I don’t have to define success. Qualcomm’s CEO did. That’s what we are talking about here. And success, as he defines it, has never been achieved before, so why would it now?

As for “they have to make a profit on each one,” the small difference in die size between M1 and 888, for example, doesn’t make much difference in profit, especially given TSMC’s yield curves (you pay per wafer start, not per die). And even if it did, that’s just more of a reason that Qualcomm can’t compete. Explaining why they can’t compete does nothing to prove that they will be successful in competing.
I'm not saying they will be successful. In fact, I think even if they could make a competitive chip, it would still be difficult to compete on Windows without backwards compatibility. But saying that Qualcomm "has not been successful" in chipmaking depends on how you define success. If they do make a decently performing chip, I think they could gain a lot of share in the Chromebook market, where support for Windows programs doesn't matter. Personally, I think this acquisition is more about reducing dependence on Arm's designs because of the NVIDIA takeover.

As for profit per chip, Qualcomm does cheap out on cache, using much less than Apple. Hard to believe that's not about increasing profits at the cost of performance.
 

Increasing the size of a cache doesn’t always increase performance - bigger caches have longer read and write latency. A smart guy I know wrote a book that discusses that.
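(A crude way to see this on your own machine: chase pointers through working sets of different sizes and watch the latency per dependent load jump as you fall out of each cache level. This is a sketch, not a rigorous benchmark - timing via clock(), no warm-up, and the size list just guesses at typical L1/L2/L3/DRAM boundaries:)

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t sink;   /* keeps the chase loop from being optimized out */

static double ns_per_load(size_t n, long iters) {
    size_t *next = malloc(n * sizeof *next);
    if (!next) return -1.0;

    /* Sattolo's algorithm: one random cycle through all n slots, so every
     * load depends on the previous one and prefetchers get little help */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    size_t p = 0;
    clock_t t0 = clock();
    for (long k = 0; k < iters; k++)
        p = next[p];                       /* serially dependent loads */
    clock_t t1 = clock();

    sink = p;
    free(next);
    return (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / (double)iters;
}

int main(void) {
    const size_t kib[] = {32, 256, 8 * 1024, 256 * 1024};  /* rough L1/L2/L3/DRAM */
    for (int i = 0; i < 4; i++) {
        size_t n = kib[i] * 1024 / sizeof(size_t);
        printf("%8zu KiB working set: ~%.1f ns per load\n",
               kib[i], ns_per_load(n, 20L * 1000 * 1000));
    }
    return 0;
}
```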

 
I designed x86 chips, including the first 64-bit chip (where I designed the 64-bit extensions to the integer math instructions, as well as various parts of the chip itself). I've also designed RISC chips (the first was a unique design roughly similar to MIPS, then a PowerPC chip which was the fastest PowerPC in the world and second in speed only to DEC Alpha, then a SPARC chip that was designed to be the fastest SPARC in the world).

So I’ll explain why your statement is wrong.

First, there is zero proof of an x86 performance advantage - in fact, the *first time* that anyone used custom-CPU design techniques (the same techniques that are used at, for example, AMD and Intel) to try and design an Arm chip to compete with x86, it succeeded magnificently (that chip is M1). Prior to M1, nobody actually tried to make an Arm chip to compete with the heart of the x86 market.

Second, there are inherent technical disadvantages to x86. The instruction decoder takes multiple pipeline cycles beyond what Arm takes, and that will always be the case. As a result, every time a branch prediction is wrong, or a context switch occurs, there is a much bigger penalty when the pipeline gets flushed. It also means that it is much harder to issue parallel instructions per core, because variable-length instructions mean you can't see as much parallelizable code in a given instruction window size. We see this play out in Apple's ability to issue, what, 6 instructions per cycle per core, vs. the max that x86 could ever possibly do of 3 or 4. And we see in the real world that Arm allows many more in-flight instructions than x86.

Third, it does no good for x86 to do things in 3 steps vs Arm’s 20 when each of those 3 steps really is 7 internal steps. And that is what happens with x86. An instruction is not an instruction. An x86 instruction will, most of the time, be broken down in the instruction decoder into multiple sequential micro-ops, using a lookup table called a microcode ROM. So those 3 instructions end up being 20 microcode instructions. Each is then processed the same way that an Arm chip would process its own native instructions. The only difference is that Arm chips don’t need to spend the time, electricity and effort to do that conversion, and because the incoming instructions are pre-simplified, it makes it much easier for Arm chips to see far into the future and to parallelize whatever can be parallelized as the instructions come in.
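(A toy model of that microcode-ROM idea - just a lookup table from an architectural instruction to the simple internal ops that actually execute. Everything here is invented for illustration; real micro-op encodings look nothing like this:)

```c
#include <stdio.h>

struct uop_seq { const char *insn; const char *uops[4]; };

/* the "ROM": architectural instruction -> its internal micro-op sequence */
static const struct uop_seq ucode_rom[] = {
    {"ADD [A], EAX",  {"LOAD  tmp <- [A]",
                       "ADD   tmp <- tmp + EAX",
                       "STORE [A] <- tmp", NULL}},
    {"ADD EBX, EAX",  {"ADD   EBX <- EBX + EAX", NULL, NULL, NULL}},
};

int main(void) {
    for (size_t i = 0; i < sizeof ucode_rom / sizeof ucode_rom[0]; i++) {
        printf("%s decodes to:\n", ucode_rom[i].insn);
        for (int j = 0; j < 4 && ucode_rom[i].uops[j]; j++)
            printf("    uop %d: %s\n", j, ucode_rom[i].uops[j]);
    }
    return 0;
}
```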

The reason "complex" instructions like x86 were once an advantage had nothing whatsoever to do with performance - it was done that way because RAM used to be very expensive, so encoding common strings of instructions into complex instructions saved on instruction memory. At one point it also might have provided a slight speed boost, too, because there were no instruction caches back then, so fetching less stuff from memory might have given a slight speed improvement in certain situations (rare ones. It's a latency issue only on unpredicted branches).

And, of course, the extra hardware to deal with x86 instruction decoding is not free - on the cores I designed, the instruction decoder was 20% of the core (not including caches). That’s big. On the risc hardware I designed, the decoders are much tinier than that. In addition to space, that means power is used, and circuits that want to be next to each other have to be farther apart to make room for it.

Another inherent advantage of Arm over x86 is the number of registers. x86 has a very small number of registers - I can’t, as I sit here, think of a modern architecture with fewer. For the same reasons as you want to avoid instruction memory accesses, you also want to avoid data memory accesses (only more so!). And the fewer registers you have, the more often you will have to perform memory accesses. That’s simply unavoidable. Each memory access takes hundreds of times longer (or thousands or more if you can’t find what you need in the cache) than reading or writing to a register. And because x86 has so few registers, you will have no choice but to do that a lot more than on Arm.

Why does x86 have few registers? Again, for historical reasons that no longer apply. Whenever you have a context switch (e.g. you switch from one process to another) you need to flush the registers to memory. The more you have, the longer that takes. But now we have memory architectures and buses that allow writing a lot of data in parallel, so you can have more registers and not pay that penalty at all. So why didn’t x86-64 just add a ton more registers? (It added some). Because the weird pseudo-accumulator style x86 way of encoding instructions couldn’t benefit from them too easily, and still allow compilers to easily be ported to it.

So the tl;dr version:

1) the only "proof" out there is that Arm has a technological advantage over x86
2) RISC is better than CISC, given modern technological improvements and constraints
3) there are specific engineering reasons why this is so
Waste of typing.

1)
Waste of typing, buddy.

1) There’s no advantage over x86 except for power efficiency.
2) CISC is better than RISC for reasons pointed out before.
3) Nope
 
How is ARM better than x86? It’s been proven many times before that x86 has a performance advantage over ARM. What ARM does in 20 steps, x86 can do in 3.



If not, see the volumes of evidence online, offline, and in the CPU itself. Or just read cmaier's insightful posts.
 
Waste of typing.

1)

Waste of typing, buddy.

1) There’s no advantage over x86 except for power efficiency.
2) CISC is better than RISC for reasons pointed out before.
3) Nope

Apparently it is a waste of typing.

What reasons pointed out before - fewer instructions to accomplish the same work? But you know that is not how it works. An x86 “add contents of memory A to accumulator” may be one instruction, whereas Arm requires “(1) load contents of memory A to register 1, (2) add register 1 to register 2 and put results in register 2,” but each takes the same number of total cycles. In fact, on any x86 machine of the last 25 years, the x86 instruction will be converted to 2 microcode ops that look pretty much like the 2 Arm instructions. But doing that conversion takes at least 2 clock cycles, meaning you have to add stages to the pipeline. So the supposed “advantage” is a disadvantage, not an advantage.
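(The same example in compilable form, with the typical per-ISA code in the comments - written from memory rather than copied from any particular compiler run, so treat the exact registers and mnemonics as illustrative:)

```c
#include <stdio.h>

static int add_from_memory(const int *a, int acc) {
    /* x86-64: one instruction can reference memory directly...
     *     movl %esi, %eax
     *     addl (%rdi), %eax        ; load + add folded into one insn
     *   ...which the core still cracks into a load uop and an add uop.
     *
     * AArch64: the same two steps are explicit in the ISA:
     *     ldr w8, [x0]             ; load
     *     add w0, w1, w8           ; add
     */
    return acc + *a;
}

int main(void) {
    int a = 40;
    printf("%d\n", add_from_memory(&a, 2));   /* prints 42 */
    return 0;
}
```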
 
Looks like Intel Employees have entered the chat.

I’m happy to be proven wrong and to learn something new. I’m much more familiar with x86 cpu design than Arm, so maybe there is some problem with Arm I’m not aware of. But when folks just reply to a detailed post with “wrong” and no explanation, one should question their motivation.
 
Haha, haven't we heard this before....🤔


(iPhone posed no threat.)
LOL - yes. Apple has created lots of stuff that just doesn't work. However, when they hit - it's a bases-loaded home run.

The iPhone is a great example of this - completely and totally destroying billion dollar companies with "years of experience" in the industry.

I think what they have done (and will do) with the M1 chip will be just as disruptive as the iPhone. Qualcomm is so far behind they will never catch up. While they are looking at getting an M1-type chip to market, Apple is already launching the next generation - and the generation after that has already been designed.
 
What reasons pointed out before - fewer instructions to accomplish the same work? But you know that is not how it works.
Heh, the problem is that they DON’T know that’s how it works. They don’t know how any of it works. :) I appreciate your ability to break down complex ideas to something more easily digestible, but you can only go so far. Some folks are fact intolerant!
 
Remember, always cheer for the underdog.
 

So Qualcomm says Apple can't compete with them in modem chips, but also says that Qualcomm can compete with Apple in ARM chip design?

Ok… bit of a lesson of sorts here.


How long has Qualcomm been making mobile CPUs, compared to Apple or Intel?
- Longer than both.

How long has Qualcomm been making mobile CPUs based off ARM Holdings' designs? A bit longer than Apple or P.A. Semi (many of whose team members are part of Srouji's team, although some have left).
> KEY point here: there are four former Apple engineers who helped design A-series chips up to the A12X, so this is NOT something to make light fanboy/fangirl jokes about.


The last 4 years have shown and taught us:
Microsoft worked with Qualcomm on the Surface Pro X (both the SQ1 and SQ2 versions), which uses Qualcomm's early mobile Arm-based chips BUT is limited to Microsoft's Windows 10 on Arm - lots of key functionality was missing.

Microsoft partnered with Samsung on their laptops - software is king, but the Queen is the chip!
- Many lessons learned here, not just about syncing Android phones to Windows, but lessons which led to Windows 11 getting the Amazon Appstore (to be manually downloaded by each end user), letting Android apps run on Windows (not all, but most of the very common apps will be available).

We do NOT know how long it took Apple to get iOS apps running on M1/Apple Silicon chips - but I'll bet it's a LOT less time than Apple took, as they needed to build a chip for this to work on their Macs… neither Microsoft, Samsung nor Qualcomm had to do the same.

The key point I'm making here is… Qualcomm has learned a LOT, and under new management and a new CEO, the vigour for pushing new frontiers is ripe. Qualcomm has VERY good engineers there; don't insult them. Also, their Arm-based chips can be their OWN designs or ARM's, whichever is better, and they'll learn from ARM's designs if those are better than their own and make theirs exponentially better. We're not certain Apple will do the same down the road.

HISTORY:
Apple sold us the early Apple I computer running a MOS 6502 chip (I think).
Apple sold us Macs for years that did NOT perform better than Intel (circa the G2-G3, and potentially the G4 in its later years).
Apple sold us the Power Mac G5, which was a BEAST - not just the CPU - and I hold that machine, with Panther, dear to my heart.
Yet after 6 months Dell had a competing Xeon system.
>> After the G5 and dual-G5 systems, the PowerPC 970-something was the last hurrah; Apple conceded Intel was better. That lasted LESS time than the Motorola+IBM+Apple relationship, which ended up being just Apple+IBM during the POWER5 / PowerPC G5 era.

Now we're praising the Apple Silicon M1 chip.
- How long will this praise and performance lead over others last?!
- How long will Srouji be at Apple? What would it take to incentivize him or his team members to leave?!
- Is there someone out there better than Srouji, in the USA or Israel or anywhere in the world? (Srouji is from Israel, for reference.)

If you haven't noticed, over the last 3-4 decades Apple vs Microsoft has been an 8-12 year flip-flop of who's got the best OS:
Macintosh vs MS-DOS, System 4 vs Windows 3.1, OS X vs Windows 7/8, OS X vs Windows 10, and soon macOS 11/12 vs Windows 11. Microsoft doesn't update their OS as often as Apple due to legacy requirements in the corporate world via their Active Directory, server, domain, etc. support cycles.

Apple doesn't have those restrictions. Well, some, due to Rosetta 2, for roughly another 1.4 years.
Qualcomm, with its upcoming ARM-based chip designs, doesn't have those restrictions, or the legacy junk that Intel keeps hanging onto with their x86-64 chips (as does AMD with desktop/mobile chips - their server-class chips are nice though). Qualcomm doesn't have to keep legacy support in hardware, and you can be sure Microsoft will work with them heavily to make Windows 11 VERY lean on ARM.

I want Apple to continually WIN… but I'm worried, maybe a bit afraid, that the 30% store-fee model is dragging the once glorious, untarnishable Apple brand name through the mud. Epic, Facebook, their consortium, and now the USA, EU and other governments are looking to pull apart Apple's way of doing business while DRAINING their financial resources in litigation - bleeding them dry is the end result! Sure, the bonus THEY are after is absolutely no fees for developers of any size, but it's part of a bigger obstacle course here, and I'm worried because…

I liked purchasing music from Apple iTunes:
- They kept album art; they kept some unique features of the old cassette/CD/LP world there. I don't purchase tracks anymore - this new world of music and artists hasn't given me a real reason to OWN music. Don't get me wrong, I WANT to purchase music to properly support talent - I LOVE music!

I liked purchasing movies:
- I still do, as some movies just went to junk when "remastered". The biggest farce was G. Lucas' rework of Episodes 4, 5 and 6 of Star Wars for theatres and then Blu-ray/DVD, with minutes of pointless aliens spliced in here and there with NO real benefit to the plot, the storyboard or the end enjoyment. Worse, some movies changed the theme music. NO. The original is where the art is, or the director's cut (like the Snyder cut of Justice League vs that JUNK the Buffy producer did. I mean, come on, the Buffy guy on a live-action DC comics movie?! NO, Snyder ALL the WAY). This is NOT the same as Netflix, Disney+ etc… as it was with digital music.

We're now in a world where streaming and owning less of everything - from a Millennial's perspective, as young adults without a lot of income (in high school or college, or having to move back home) - has changed the world's view of ownership.

This will heavily translate to:
How computers will be purchased 5-10yrs from now,
how we do NOT own cars - yup subscription fees slightly cheaper than lease,
how we think about purchasing or owning software and its lifetime of support:
- Adobe is full subscription now. Microsoft has joined in, with their Office suite getting a LOT fewer years of support than before 2016 - no more security updates for Office 2010, 2013 or 2016, yet they just released Office 2019 (already 3 years old by date)??? And they have a subscription for online use?!

Now… what if foundries start charging perpetual fees to the manufacturers or companies that make computers? Imagine paying for the CPU in your Mac for up to 6 years (until it goes Vintage) AFTER you've already purchased it. Don't laugh - many of the recent changes I once laughed at: streaming music instead of owning it (same for movies, though Netflix rentals, Blockbuster and Columbia House DVD/CD/VHS got my generation used to that before going fully digital), purchasing water - yeah, back in 1984 my Grade 5 teacher warned us this was coming. We laughed and thought he was a funny, great teacher, but cuckoo. Here we are.

So Apple vs Qualcomm on desktop/laptop CPUs… may currently seem like a joke, or like they should just partner (how, I really don't know), but it'll be a real thing soon enough.
 

A smart refrigerator had a 'fee' to keep the smart part working. HAH! The real 'smart part' is never buying that POS, and getting a new technology 'dumb fridge'.

"Honey, the toaster isn't working. Did you pay the licensing fee last month? You remember what happened with the toilet, right?"
 

Well, by that thought I guess many are in for a surprise with their smart TVs? I don't own one myself, but getting a dumb TV isn't so easy anymore.
 