I know a lot of people here do not like Intel, but we need them, and competition is a good thing
Agreed. I have slammed / made fun of Intel recently, but that is because of how far (in my opinion) they have slid.

And although I am all-in for Apple, I want Apple to get stiff competition so they need to keep improving their products and not rest on their laurels. Plus competition for Apple means good products for people who don't want to buy Apple stuff.
 
"we will be relentless in our pursuit of Moore's Law"

Erm, Intel stood still. As far as I know, Intel didn't innovate for most of the last decade; it was more like milking the (fat) cow.
 
Spot on! Seems many here are not aware of Qualcomm's history, its founders, and the modern communications processing techniques and technology they developed going back many years, starting with Dr. Andrew Viterbi. They deserve a ton of credit for what we take for granted today.

It was a brilliant move for Qualcomm to purchase Nuvia, along with Apple's former CPU and SoC architect Gerard Williams, rounding out their portfolio going forward. And I'm looking forward to seeing what they come up with.
Without Viterbi there is no modern wireless communication. Trellis algorithms and other techniques make modern wireless possible.
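For anyone wondering what a trellis algorithm actually looks like, here is a minimal hard-decision Viterbi decoder sketch in C for a toy rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal). Everything in it (the code, the message, the injected error) is an illustrative assumption for the sketch, not anything out of a real modem; it uses the GCC/Clang parity and popcount builtins.

```c
/* Toy hard-decision Viterbi decoder: rate-1/2, constraint length 3,
 * generator polynomials 7 and 5 (octal). GCC/Clang builtins used. */
#include <stdio.h>
#include <string.h>
#include <limits.h>

#define K       3                  /* constraint length */
#define NSTATES (1 << (K - 1))     /* 4 trellis states */
#define MSGLEN  8                  /* 6 data bits + 2 flush zeros */

/* Encoder output (2 bits) for one input bit from a given state. */
static int conv_output(int state, int bit)
{
    int reg = (bit << (K - 1)) | state;    /* newest bit on top */
    int g0 = __builtin_parity(reg & 07);   /* taps 111 (octal 7) */
    int g1 = __builtin_parity(reg & 05);   /* taps 101 (octal 5) */
    return (g0 << 1) | g1;
}

int main(void)
{
    int msg[MSGLEN] = {1, 0, 1, 1, 0, 1, 0, 0};  /* last 2 flush to state 0 */
    int rx[MSGLEN];

    /* Encode, then flip one received bit to simulate channel noise. */
    int state = 0;
    for (int t = 0; t < MSGLEN; t++) {
        rx[t] = conv_output(state, msg[t]);
        state = ((msg[t] << (K - 1)) | state) >> 1;
    }
    rx[3] ^= 2;                                  /* single bit error */

    /* Forward pass: best (lowest Hamming cost) path into each state. */
    int metric[NSTATES], next[NSTATES];
    int prev[MSGLEN][NSTATES] = {{0}};           /* survivor pointers */
    for (int s = 0; s < NSTATES; s++)
        metric[s] = (s == 0) ? 0 : INT_MAX / 2;  /* start in state 0 */

    for (int t = 0; t < MSGLEN; t++) {
        for (int s = 0; s < NSTATES; s++)
            next[s] = INT_MAX / 2;
        for (int s = 0; s < NSTATES; s++) {
            for (int bit = 0; bit < 2; bit++) {
                int ns = ((bit << (K - 1)) | s) >> 1;
                int d  = __builtin_popcount(conv_output(s, bit) ^ rx[t]);
                if (metric[s] + d < next[ns]) {  /* keep the survivor */
                    next[ns] = metric[s] + d;
                    prev[t][ns] = s;
                }
            }
        }
        memcpy(metric, next, sizeof metric);
    }

    /* Traceback from state 0 (the flush bits force the encoder there). */
    int decoded[MSGLEN];
    state = 0;
    for (int t = MSGLEN - 1; t >= 0; t--) {
        decoded[t] = (state >> (K - 2)) & 1;     /* input bit of this hop */
        state = prev[t][state];
    }
    printf("decoded:");
    for (int t = 0; t < MSGLEN; t++)
        printf(" %d", decoded[t]);
    printf("\n");                  /* matches msg despite the error */
    return 0;
}
```

The trellis is the whole trick: at each step only the cheapest path into each encoder state survives, which is what makes maximum-likelihood decoding tractable enough to put in a phone.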

Qualcomm pioneered and still is a leader in wireless. People can talk about the Apple 5G modem product but it's vaporware until it's not.

They bought the Intel modem team and some managers and executives who had fled Qualcomm. What they didn't get was the real brains behind the algorithms and the architects.

I know some of them.
People here that call Qualcomm a troll don't understand the contribution.

Some people think that just because TSMC calls their process 5nm, it must have the highest transistor density.

Dismiss a Qualcomm/Intel partnership at your own peril. For a competitor to dismiss this threat would be crazy.
 
Intel moving on from x86 is a given imo. That's why I had another thread where I said it really sucks buying an x86 laptop now, when the chip maker itself (Intel) is transitioning away. I bet in 5 years or so Intel will start abandoning x86. We can already see the writing on the wall with Microsoft putting such stringent CPU requirements on Windows 11, thus making the upgrade cycle more frequent for consumers in the future.

Moving on from x86 (32-bit)? There is a pretty good chance that is a question of "when should they do it" rather than "will they ever do it".

Moving on from x86_64 entirely? Probably not. Intel already has a small foot in the ARM world because they have Altera FPGA products. That will probably not change for several more years. (If RISC-V takes off, though, I wouldn't be surprised if they followed it.) At some point, the thing Intel has to dump is the ability to run software from the 1990's (i.e., the previous century). That has gone from being an inertia benefit to an inertia boat anchor of largely wasted die space. BIOS is another antiquated boat anchor that they should have shaken loose several years ago.


The interrupt system is dragging in part because it is so old.


AMD proposes some band-aids for the broken parts, and Intel has a mode switch to turn it off. Or... how about just lopping the really old stuff off into the 'trash can'? That solves most of the problem too.

3-5 different generations of partially overlapping vector instructions (MMX, SSE, SSE2, SSE4, AVX, AVX2, ...). Imagine not dragging last-century baggage around on that front too. That would have a bigger impact on some applications, but there are likely some simplifications that can be made there.
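To make the overlap concrete, here is a throwaway C sketch of the same 8-float add written against two of those generations (the arrays and values are made up; build with -mavx on GCC/Clang):

```c
/* The same 8-float add expressed in two overlapping x86 vector
 * generations. Build with: gcc -mavx simd_overlap.c */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float r[8];

    /* SSE (1999): 128-bit registers, 4 floats at a time, so two ops. */
    _mm_storeu_ps(r,     _mm_add_ps(_mm_loadu_ps(a),     _mm_loadu_ps(b)));
    _mm_storeu_ps(r + 4, _mm_add_ps(_mm_loadu_ps(a + 4), _mm_loadu_ps(b + 4)));

    /* AVX (2011): 256-bit registers do it in one op -- yet the SSE
     * encodings above must keep decoding on every chip forever. */
    _mm256_storeu_ps(r, _mm256_add_ps(_mm256_loadu_ps(a), _mm256_loadu_ps(b)));

    for (int i = 0; i < 8; i++)
        printf("%g ", r[i]);   /* prints eight 9s */
    printf("\n");
    return 0;
}
```

Current silicon still has to decode and execute every one of the older encodings, which is exactly the kind of baggage a cleanup would shed.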

By 2006, x86_64 had pretty well percolated into all the mainstream CPUs. So by 2026 there will be 20 years of ground-level software inertia built up, on sales that were significantly higher than the previous 20 years'. Intel's "Lunar Lake", which should surface around 2023-25 and which Jim Keller (and others) had some time to put some "rethink" direction on microarchitecture-wise... it would not be surprising if that "rethink" includes some major cleanup.

Windows 11 is going to be 64-bit only. There are probably some folks who will still want a 32-bit embedded OS over the next several decades, but that could be a "side show" product line for Intel. Windows, ChromeOS, macOS, and mainstream Linux don't necessarily need 32 bits. (Android probably doesn't either, but I haven't looked at that issue for that OS in a long time.)


Intel doesn't have to throw the entire baby out with the bath water... but dragging around poopy bath water and diapers doesn't make much sense after a while.

That's one factor that Apple (and a few others) have pragmatically pushed on Arm. Arm is full-blown into a dump-32-bit transition. The Arm v9 architecture has already done it for 2 of the 3 major subgroupings of core designs (next year's X2 and A510 won't have 32-bit, but the A710 still will; Apple has already completed desupport). Can folks still build Arm v8 designs 3-10 years from now? Sure. But does every single Arm instance have to have 32-bit present? Nope.


Will a "32-bit garbage collection" on x86_64 make the instruction code exactly as neatly uniform in size as most Arm instructions? No. However, it doesn't need to be. Part of Intel's problem is that they have gone too far into the "do everything for everybody" zone: one instruction-set implementation that serves the folks who want to sit and squat in the 1990's forever, mixed in with the folks trying to compete against the 2030 designs coming around the corner.
 
Qualcomm is just as bad. They have control over the US Android SoC market because they bundled modems in with their SoCs, letting them push out competitors like TI and Nvidia.

In what alternative universe was Nvidia ever a major player in the smartphone market?

This is a little bit of revisionist history. TI just took basic ARM designs largely unchanged (on the central cores), coupled an Imagination Tech GPU to them, and fabricated them (mostly on contract at outside fabs... no fab value-add either). There were some DSP and camera additions, but nothing hyper-competitive. Qualcomm was doing custom Arm cores early on (they had an architecture license, possibly before Apple got one). They also bought a mobile GPU line off AMD/ATI (2009); Qualcomm's GPUs are named "Adreno", which is an anagram of "Radeon". In comparison, TI put about zero "value add" on top of what they got on the two major core implementations. TI at one point had a placement in a Samsung phone; that was doomed long term too, with Samsung having its own CPU and modem teams.
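(For the skeptical, a throwaway C check of that anagram claim; the helper name is mine, not anything official:)

```c
/* Histogram check: two words are anagrams iff letter counts match. */
#include <ctype.h>
#include <stdio.h>

static int is_anagram(const char *a, const char *b)
{
    int count[26] = {0};
    for (; *a; a++) count[tolower((unsigned char)*a) - 'a']++;
    for (; *b; b++) count[tolower((unsigned char)*b) - 'a']--;
    for (int i = 0; i < 26; i++)
        if (count[i] != 0) return 0;
    return 1;
}

int main(void)
{
    /* prints "Adreno/Radeon: anagram" */
    printf("Adreno/Radeon: %s\n",
           is_anagram("Adreno", "Radeon") ? "anagram" : "not an anagram");
    return 0;
}
```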

Qualcomm's in-house Arm architecture development stumbled on the 32-bit to 64-bit transition. Qualcomm had planned to take longer (viewing 64-bit through a server-variant lens), but Apple looked at 64-bit not as a chance to use more address bits but as a chance to dump old Arm opcodes they didn't want or need. It didn't help either that some takeover troll was trying to suck money out of the company... so to cut costs Qualcomm went with stock Arm designs with very minor tweaks. TI could have jumped forward there if they had applied money and resources... but they didn't.

Nvidia never was a serious player, in part because they dragged over their desktop-skewed GPU designs. It doesn't matter what cellular modem you have attached if the SoC can't make the thermal/battery-life design cut. Basically the same problem Intel had trying to squeeze x86 into smartphones, only there it was the CPU part of the SoC. Intel and Nvidia made some adjustments, but their software stacks, major focus points, etc. were largely elsewhere.

It would have been easier for TI and Nvidia to limp along in the bottom half of the Android market if Qualcomm's discrete modems were cheaper. But again, MediaTek survived just fine. There is a bit of hand-waving here discounting the impact that MediaTek, Samsung, and Huawei/HiSilicon had on the baseline SoC for budget-to-midrange phones worldwide. (No way TI or Nvidia could be a USA-only Android supplier and survive.)



Without any alternatives they have stagnated and are not remotely competitive with Apple.

No alternative modems? Samsung, MediaTek, and others sell modems. The major problem for those is that for CDMA-A/B (Verizon and Sprint) they were nowhere near as good as Qualcomm's offerings.

But the notion that they "couldn't make any market" is somewhat dubious when Apple started off with, and still is using, discrete modems. There is, however, a major problem with making super-discount budget phones with discrete modems.

Infineon missed the call on the transition to 4G (in part trying to avoid Qualcomm's patents at the core of the CDMA encodings). Intel bought them right after Apple dumped them (because they couldn't get to one worldwide modem). All that isn't really Qualcomm's fault.


Their long term support for their SoCs is atrocious too. At least Intel actually manufactures things. The only thing Qualcomm manufactures is patents and lawsuits.

Google's functional decomposition of the Android kernel, meant to assist proper support for long-term upgrades, has been slow as well. Qualcomm is worse, but not the only "cook" who has been screwing up long-term-support infrastructure for the last 10 years.
 
Why is Intel so confident that it will gain "process performance leadership by 2025"? It is far behind TSMC and Samsung at present, and 2025 is not that far away. Did it get an endorsement from a superpower, say, the NSA?

Intel has lots of money. They are getting the first NA-EUV fabrication machine from ASML (so probably paying to be at the front of the line). There is no way for TSMC or Samsung to beat them to NA-EUV pathfinding. If Intel blows their time on the "wrong" paths, then eventually ASML will sell machines to the other two.

I suspect that has a better chance of being "leading at lower volumes" (which is fine for more than a few fab clients; not Apple, but folks with higher-priced silicon to sell). Intel will probably never catch up on EUV fab capacity, but the next-generation EUV tech starts out on a roughly even playing field again. That inflection point is an "in" for Intel. (The previous two boneheaded Intel CEOs missed the importance of the EUV inflection point.)

Intel got an "endorsement" from Qualcomm to put one product (not their entire lineup) onto Intel's 20A. There is a chance Qualcomm is placing a bet in case Samsung's "gate all around" doesn't work out and TSMC can't do the volumes Qualcomm expects. In Monday's Q&A after the presentation, Intel mentioned that Qualcomm could be taking a "known" design and doing a shrink to Intel's 20A. So Intel gets a "tick-tock" progression but doesn't have to do both "halves" (process shrink and new microarchitecture) itself. That should help the fab process go faster. If Intel's foundry can find a client who wants to be on the bleeding edge with a 50-80% smaller die than Intel's mainstream products, then they can iterate on fab improvements without waiting around for the CPU package team to finish a large design.

That Intel 20A process could be 100% skipped by the CPU package folks if there are enough outside foundry customers to keep the EUV machines busy in 20A mode. Intel takes all that learning, plus some very good NA-EUV pathfinding, and applies it to 18A. That could end up largely closing the gap if there are no hiccups.

Intel has had several hiccups due to:

1. Only one primary customer (CPU packages) and a couple of "hostage" products (Altera FPGAs and modems made to use a process optimized for the CPUs).

2. Throwing out the "tick-tock" mindset (i.e., manage complexity by limiting how much changes at one time). The original stab at 10nm tried to make a huge leap in one jump: it changed density, metal mixtures, patterning, and about 4-5 other things all at the same time. A series of smaller yearly jumps is simpler to manage because there are fewer adverse interactions.

3. "Premature optimization is the root of all evil" -- Knuth.
Intel doesn't have to be first on every dimension as soon as possible. They aren't shooting for most dense at any cost. The metric they mentioned as their new baseline is perf/watt. That means their wattage consumption may not be as low, but if they get a bigger "bang for the buck" performance-wise, they'll take it.
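A toy example with made-up numbers (assumptions for illustration, not Intel figures) shows how that trade-off works:

```c
/* Two hypothetical chips: B draws more absolute power but still wins
 * on the perf/watt baseline. All numbers are invented for the example. */
#include <stdio.h>

int main(void)
{
    double perf_a = 100.0, watts_a = 10.0;   /* lower-power design   */
    double perf_b = 125.0, watts_b = 11.5;   /* faster, hungrier one */

    printf("A: %.2f perf/W\n", perf_a / watts_a);   /* 10.00 */
    printf("B: %.2f perf/W\n", perf_b / watts_b);   /* 10.87 */
    return 0;
}
```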


A broader set of customers should help:

1. Intel can find a customer whose problem fits the fab process they have working now. For that customer, they are a bleeding-edge option.

2. If they offer a broader set of fab processes, not all of them have to take large leaps or get stuck.
(E.g., the 2020 vaccine approach in the USA: bet on 5 different vaccines and at least one is a winner. You have to spend more to place more bets, but the likelihood that they all lose goes down as you try more concurrent options; see the sketch after this list. If Intel stopped stock buybacks and invested in a better fab process, they could probably do a lot by not wasting that money. All the while 10nm was sucking wind, Intel was buying back billions. Get off of being 'stuck on stupid' and it's not surprising what they can do.)

3. Spread the risk around to a broader set of libraries. Intel also mentioned that they were going to try their "backside power delivery" on Intel 4 or Intel 3 in a limited way before it is mainstream on Intel 20A. That again may be either a limited set of customer libraries or a limited subset of an Intel design: work out the kinks at a larger size and then roll it out at a smaller size on a broader scale.
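Here is the parallel-bets arithmetic from point 2 as a quick C sketch; the 40% per-bet success odds are purely an assumed number for illustration (link with -lm):

```c
/* If each of n independent bets succeeds with probability p, the
 * chance they ALL fail is (1-p)^n. p = 0.4 is an assumed value. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 0.4;                          /* assumed odds per bet */
    for (int n = 1; n <= 5; n++)
        printf("%d bet(s): all fail with probability %4.1f%%\n",
               n, 100.0 * pow(1.0 - p, n));  /* 60.0 ... 7.8 */
    return 0;
}
```

Five concurrent bets drop the all-fail probability from 60% to under 8%, which is the whole argument for running multiple process options at once.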

"Make it work and then make it fast." There is no good reason to try all the new fab improvements in all of the libraries at the same time.

It won't be surprising if they end up with a fab process that isn't optimized for mobile products. Or, if the fab business lands a large enough mobile customer, one of the alternatives gets tweaked that way while the rest of the product line is on another with a different focus. (If Intel has alternatives, I doubt every single alternative will be bleeding-edge best for every single market.)



Intel has historically gotten up and over-promised. They may be over-promising here too. But there is a difference this time in focus and risk mitigation. They are betting on some major new leaps: that their "gate all around" RibbonFET is more competitive; that they master backside power delivery (PowerVia); that they work out all the kinks in EUV-scale fabrication by the 3rd generation (introduced at Intel 4 for several layers, with more layers added at Intel 3).

I would wager that in part they are pointing to "at-risk", early access to Intel 18A (where they start up on NA-EUV) when they talk about parity, more so than extremely-large-scale production. That would be 2024 or so on their timeline (but volume likely substantially later).
 
No alternative modems? Samsung, MediaTek, and others sell modems. The major problem for those is that for CDMA-A/B (Verizon and Sprint) they were nowhere near as good as Qualcomm's offerings.
Of course Qualcomm is the best in modems. I'm saying that they have stagnated with SoCs as compared to Apple.

Here's a nice deep dive into the kind of company Qualcomm is (and the antitrust trouble they got involved with):
https://arstechnica.com/tech-policy...-the-cell-phone-industry-for-almost-20-years/
 
Qualcomm was doing custom Arm cores early on (they had an architecture license, possibly before Apple got one).
Interesting. I had always assumed Apple had a license from being one of the partners of the original ARM startup in the 1990’s.
 
Interesting. I had always assumed Apple had a license from being one of the partners of the original ARM startup in the 1990’s.

Architectural licenses are for a specific architecture generation. Having an ARM arch v8 license doesn't mean you get v9, v10, v11, etc. and everything into the future forever. It just means you get to do v8 designs forever. Highly likely, Apple had to write a check for v9 (or is on a multi-year payment plan). Did they pay the same price as everyone else? Maybe not.

Being a partner in starting it up means you got stock in ARM... not rights to every intellectual-property artifact produced forever. ARM was founded as a separate company. Apple may have owned a large share initially, but ARM was/is a fully standalone company. It would be fiscally, even criminally, irresponsible to just "give away the farm forever".

Early on, ARM wasn't on an IP-sales-only revenue model; it mostly designed processors and had them made under contract. The shift to selling designs and hardly anything else took a long while. Apple's start-up relationship wouldn't have been about grabbing the IP. (It wouldn't be surprising if there was an "if the company fails, Apple gets first crack at buying the IP" clause in there, similar to a software escrow.)

Apple has dumped everything from the 32-bit era back to the origins in their current SoCs. Even if they had some original-story rights... that has all been flushed down the toilet at this point.
 