I don’t think Apple ever publicly complained about Intel’s performance. And Apple did use their hardware; they had no choice but to, as their own hardware ran on it. It’s just that what Intel provided didn’t perform anything like what Intel had promised.

...on notebooks that’s true. Intel on desktop is great (AMD is great on desktop too and has kept Intel’s prices in check). Intel also currently runs essentially any software available (other than iOS apps).
 
Apple has long lost their right to complain about Intel's raw performance. The hardware has always been there, Apple just decided not to use it.

It’s more that Intel’s chips didn’t allow Apple to make the computers they wanted to make.

Which is why Apple is going with their own processors now. These chips should, among other things, be more power efficient and generate less heat, which in turn allows for thinner and lighter form factors without the thermal throttling problems the earlier MacBooks suffered from.

Performance should be superior as well. The advantage here is that Apple can optimise the hardware for their own OS and software without having to worry about the rest of the industry.

Can’t wait to see what Apple can accomplish with them.
 
Move over, Intel. You're the 2020 version of the PowerPC. Can't wait for ARM. Typing this on a 16" MBP with blaring fans just because an external monitor is plugged in. :rolleyes:

Now if that doesn't happen with my Windows laptop... I am going to say it's Apple's fault on that one, kid.

I also have a MacBook Pro 16-inch and it doesn't happen to me, so I am going to say it's your fault, kid.
 
But you CAN blame Intel when Apple says, “Hey, this is what I’m coming out with over the next couple years, can your processor handle that” and Intel says, “OH SURE!” but then goes, “Ummmm, so we didn’t quite hit the mark, we overshot power consumption by like 20 watts”. Which is not far from what happened.


For their mobile systems, the ones that make up 80% of what they sell, there have been a few times where Apple introduced a product using an unspecified Intel processor that only showed up in Intel’s database AFTER the release.


Bootcamp was a thing primarily because, at the hardware level, the systems were VERY similar, as they were both using Intel-compatible chipsets/motherboards. That’s not the case for ARM. If the architecture put together by Microsoft is not structurally the same as the one put together by Apple (which it probably isn’t), then Bootcamp becomes much less of a certainty.

Apple has long lost their right to complain about Intel's raw performance. The hardware has always been there, Apple just decided not to use it.
 
TSMC currently has a higher market value than Intel, by roughly 150 billion dollars.

As of now, Intel's market cap is USD 214 billion, while TSMC is worth USD 368 billion.

Dear Intel, buy ARM and TSMC if possible.

It seems that Arm is for sale, and that NVIDIA is interested in buying it, Forbes (among others) is reporting (https://www.forbes.com/sites/daveal...t-could-upend-the-chip-industry/#2f231ed35742).

Just understand that Apple is hostage to TSMC and their process.
TSMC 7nm ≈ Intel 10nm in density.
Apple doesn't own a fab.

I suppose Apple will not be a hostage to TSMC. Apple will use TSMC's process for the time being, but it still has its own architecture. It could switch to Samsung, for instance, if TSMC fails to deliver. By using Intel, Apple was tied to Intel's schedule.

Wow they’re finished. Never thought I’d see Intel become an endangered species, but this seems to be it.

Intel is definitely losing ground, in so many ways. Apple departing from Intel was expected, and it is not the end of the world. Plus, it is more a consequence of the troubles Intel is facing than a cause.

The problem is everything else. Intel failed to deliver 10nm processors on schedule and is now failing again with the 7nm process. Intel's 10nm may be as dense as TSMC's 7nm or even denser, but Intel used to be ahead of everyone else a few years ago, and now it is not.

Last year, Intel had production issues and was not able to deliver as many 14nm processors as planned, leading many consumers to buy AMD instead. AMD's Ryzen lives up to expectations and is superior to AMD's previous processors. As AMD is offering more cores and threads at a lower price with similar performance, that pushes Intel's steep prices down.

Intel has been unable to produce decent GPUs. Back in 2009, Intel cancelled its Larrabee project, which was supposed to result in a graphics card to rival NVIDIA and AMD's ATI. But Larrabee was a failure, and Intel stuck to low-end integrated graphics solutions, which kept evolving over the years but were not on par with competitors'. AMD ran circles around Intel with better integrated graphics, and Intel was not even a contender in high-end video cards.

And then there is Qualcomm. Back in 2017, Intel pathetically threatened Qualcomm with legal action when Microsoft announced Windows on Arm devices (https://www.forbes.com/sites/tirias...and-qualcomm-over-x86-emulation/#a108f0a54f43). It became clear to the world that Qualcomm could eat into Intel's business, and that was the reason Intel was willing to start a legal battle. And while Qualcomm may not yet be able to deliver processors to power great Windows machines, it is on the way. Qualcomm processors will likely deliver good performance while being more energy-efficient than Intel's, which is a gold mine for laptop manufacturers. Just like Apple did, many other computer manufacturers will switch to Arm in the future.

And finally, there is NVIDIA. NVIDIA offers great performance in its GPUs. It has a market cap of USD 250 billion, also above Intel's. And it is willing to buy Arm, which could lead it into the CPU market as well.

So, Intel is set to lose the battle to TSMC in semiconductor processes, to AMD/NVIDIA in desktops and high-performance computing, and to Qualcomm/Arm in mobile processors. It is getting harder and harder for Intel to recover.
 
Apple has long lost their right to complain about Intel's raw performance. The hardware has always been there, Apple just decided not to use it.
I don’t think Apple ever publicly complained about Intel’s performance. And Apple did use their hardware; they had no choice but to, as their own hardware ran on it. It’s just that what Intel provided didn’t perform anything like what Intel had promis... wait. Is it Groundhog Day?
 
Wow, to think x86 may actually have reached a limit.

x86 refers to a group of instruction set architectures, not hardware architectures. I say a group because you can have x86 without certain parts of it; I doubt anyone is compiling for MMX these days. Instruction sets don't really reach a limit. They could be unsuitable for a particular hardware design, though.

ARM also refers to a group of instruction sets, which are published and licensed by Arm. They are arguably prettier, in my opinion. They don't provide versions of everything with a memory argument as one operand, or at least they didn't the last time I checked. Instead they provide more register names. This approach is arguably simpler for an optimizer, as it requires a smaller number of opcodes and the compiler doesn't have to handle explicit folding.
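To make that concrete, here's a minimal sketch (mine, not anything official; the assembly in the comments is typical, simplified compiler output, and the exact output varies by compiler and flags) of how the same C line compiles with a folded memory operand on x86-64 versus an explicit load on AArch64:

/* Minimal sketch of the memory-operand difference. The assembly in
   the comments is typical, simplified compiler output; exact output
   varies by compiler and flags. */
#include <stdio.h>

int add_from_memory(int x, const int *p)
{
    /* x86-64 can fold the load into the add, one opcode with a
       memory operand:
           addl (%rsi), %edi
       AArch64 is load/store, so it takes two instructions and an
       extra register name:
           ldr w8, [x1]
           add w0, w0, w8 */
    return x + *p;
}

int main(void)
{
    int v = 40;
    printf("%d\n", add_from_memory(2, &v)); /* prints 42 */
    return 0;
}

The register-only form means the optimizer deals with fewer opcode variants; folding the load becomes the hardware's problem rather than the compiler's.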

In spite of the advantages I mention, instruction sets don't hit a limit. They just fail to meet the underlying requirements of the architecture. Your complaint seems to be with Intel's architecture, not x86. The only real coupling there is that Intel is the main driver of new opcodes, and they pick stuff to suit their hardware.
 
Is that rule of thumb even true any more? I seem to remember (though I can't find it again now) that by one account Cannon Lake (and presumably Ice Lake too) is basically Kaby Lake shrunk to a ~10nm process, with little if any density increase over 14nm at all. Intel just wanted to get something out there they could brand as 10nm, but it bears very little resemblance to their original plans.

Of course not. It was BS spread by Intel marketing/PR.

TSMC 10nm actually yields enough to produce hundreds of millions of iOS devices; ditto for TSMC 7nm.

Intel? 10nm barely yields and is 4 years late, and Intel 7nm doesn’t even exist yet.

Intel can claim their process is equivalent to whatever; until it actually ships working parts, in volume, it’s trash.
 
I am, however, curious why AMD was not considered. Their Ryzen platform is phenomenal! I've just set up a Plex server with a Ryzen 5 3600 and it rips through multiple 4K encoding streams with ease.

I suspect they were. There have been references to AMD chips in beta builds of macOS Catalina.

The catch is that AMD hasn’t exactly been as phenomenal when it comes to landing the products. Some issues plagued the 3000-series at launch, and some may still impact them today. These are all things that I encountered with the gaming PC I built around the launch of the 3600:
  • There were issues with 3000-series chips not reaching advertised boost clocks. AMD had to issue a microcode fix.
  • There were issues with the 3000-series chips boosting under very low loads because it was set to be too sensitive.
  • AMD shipped a broken RDRAND instruction that had to be fixed by microcode (see the sketch at the end of this post). It didn’t break a lot of software, but it just looks very sloppy.
  • AMD’s turnaround wound up being on the order of 1-2 months to resolve the day-one issues.
  • Even after all the fixes, the 3600 in my case still idles at higher temps than a 2600 I also had, and seems to boost even when Windows is doing not a whole lot.
The chips themselves are good performers, but when Apple is likely upset with Intel’s QC and wants more power-efficient chips... the 3000-series isn’t a great counter-example. If Apple was getting engineering samples from AMD, they may have made the final call on which way to go based on that experience.

Enthusiasts are more willing to put up with this sort of thing to get cheaper, faster chips. But I can’t imagine Apple doing that.
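For anyone curious about the RDRAND item above, here's a rough sketch (my illustration, not AMD's actual fix) of the kind of sanity check software ended up adding. As I understand it, affected chips returned all-ones while still signalling success via the carry flag, so testing for repeated 0xFFFFFFFF results catches it. Needs an x86 compiler with RDRAND support (e.g. gcc -mrdrnd):

/* Rough sketch of a sanity check for the stuck-at-all-ones RDRAND
   failure mode described above. Illustration only. */
#include <immintrin.h>
#include <stdio.h>

/* Returns 1 if RDRAND looks healthy, 0 otherwise. */
static int rdrand_looks_sane(void)
{
    unsigned int val;
    int all_ones = 0;

    for (int i = 0; i < 8; i++) {
        if (!_rdrand32_step(&val))
            return 0;               /* hardware reported no entropy */
        if (val == 0xFFFFFFFFu)
            all_ones++;
    }
    /* Eight genuinely random 32-bit words all coming back as
       0xFFFFFFFF is astronomically unlikely; several in a row means
       the instruction is effectively broken. */
    return all_ones < 4;
}

int main(void)
{
    printf("RDRAND %s\n", rdrand_looks_sane() ? "looks OK" : "looks broken");
    return 0;
}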
 
I suspect they were. There have been references to AMD chips in beta builds of macOS Catalina.

AMD CPUs or just GPUs? My guess is that Apple has long wanted Macs to have in-house designs, and that any consideration of AMD would have been for negotiating purposes, or as a fallback in case there were more delays to Intel 10nm before their own chips were ready. Since AMD is fabless, it still requires using TSMC or another fab to make the actual chips. The issue with Intel isn’t the ISA or CPU designs per se, but rather their struggles with the 10nm and now 7nm processes. Their 14nm process was arguably the best in the industry at the time, but competitors caught up and surpassed them, most notably TSMC, which doesn’t design chips itself.

Once Apple made the decision to design their own chips, going with the ARM ISA made the most sense. Neither Intel nor AMD can license the x86 ISA to anyone else, or put themselves up for sale without the other’s consent, without voiding their cross-licensing agreement. So retaining x86 wasn’t a realistic option. RISC-V is theoretically a possibility, but Apple already has lots of experience designing chips for the ARM ISA, so it makes the most sense to stick with something they know, and for which there is a relatively large developer base (albeit mobile-oriented).
 
As AMD is offering more cores and threads at a lower price with similar performance, that pushes Intel's steep prices down.

To be clear though, Intel desktops are faster than AMD until you have an application using more than 8-10 cores (which not many really take advantage of yet). I have only been buying AMD lately (2 x 3900X and a 3700X), but Intel makes “the best” desktop chips if you measure by the most relevant performance metrics. In addition, software has generally been optimized for Intel even before raw power is taken into account.

I’m excited to see if Apple beats Intel in absolute performance (though sadly it won’t be relevant for me, since I only use Windows software in my work).

Intel will also be bringing big and little cores to Windows.

How much their power and heat inefficiency will offset this, and whether they stay relevant, is an unknown that is clearly worrying investors (raw processing power has largely seen diminishing returns over the past 5+ years, which is part of what allowed Intel to be lazy?).

For myself, virtually all computing latency is network-related, and the benefit of CPU/GPU progress is portability and battery life.
 
...

Intel? 10nm barely yields and is 4 years late, and Intel 7nm doesn’t even exist yet.


10nm parts are in several high-volume laptop models at this point (including the MBP 13" four-port 2020 model).
10nm is in the many-millions zone. That is past "barely".

7nm has to exist for there to be a root-cause-diagnosable problem with it. The notion that it doesn't exist unless you can buy it at Best Buy or out of the Fry's discount cast-aside bins is a gross exaggeration.

Intel's 14nm was about 6-12 months late also. If 10nm is 4 years late and 7nm is back to the same slack as 14nm, then that is actually progress in the big picture.

Intel is probably going to ship 7nm in late 2021, just not in large volume for the mainstream market: high-end GPUs (Xe-HPC Ponte Vecchio) and perhaps some other relatively very high-priced FPGA models. High unit prices mean lower yields aren't a 'show stopper' for those products. TSMC's 'at risk' production starts many months before 'high volume' production; just because something is in 'at risk' mode doesn't mean it "doesn't exist yet".

The bigger problem is Intel not being truthful (or at least not transparently setting expectations). As late as December 2019, their story to customers and analysts was that 7nm was on track. And then it wasn't. If they had said "7nm has some early-stage issues to work through, but no show stoppers so far", then this 6-month shift would have gone down much better. So when Intel says they have the root cause diagnosed and are rolling out the solution, that gets read as "doomed, failure, meltdown mode and horribly broken".


Intel can claim their process is equivalent to whatever; until it actually ships working parts, in volume, it’s trash.

7nm is shipping about when they said it would. The gap here is when high-volume shipping would kick in.
If Intel weren't so deeply wedded to 40-50% markup margins, they could probably shift to high volume sooner (and get some of that delay back). There is a greed portion here too: Intel execs just don't want the stock price to go down even when there is a screw-up. Sometimes you just have to take some lumps to get back in the game on competitive footing. [A higher markup for a fab process that is better than everyone else's is something they could perhaps make a 'value add' case for; a higher markup for a fab process that is in 2nd or 3rd place... not so much.]
 
10nm parts are in several high-volume laptop models at this point (including the MBP 13" four-port 2020 model).
10nm is in the many-millions zone. That is past "barely".

7nm has to exist for there to be a root-cause-diagnosable problem with it. The notion that it doesn't exist unless you can buy it at Best Buy or out of the Fry's discount cast-aside bins is a gross exaggeration.

Intel's 14nm was about 6-12 months late also. If 10nm is 4 years late and 7nm is back to the same slack as 14nm, then that is actually progress in the big picture.

Intel is probably going to ship 7nm in late 2021, just not in large volume for the mainstream market: high-end GPUs (Xe-HPC Ponte Vecchio) and perhaps some other relatively very high-priced FPGA models. High unit prices mean lower yields aren't a 'show stopper' for those products. TSMC's 'at risk' production starts many months before 'high volume' production; just because something is in 'at risk' mode doesn't mean it "doesn't exist yet".

The bigger problem is Intel not being truthful (or at least not transparently setting expectations). As late as December 2019, their story to customers and analysts was that 7nm was on track. And then it wasn't. If they had said "7nm has some early-stage issues to work through, but no show stoppers so far", then this 6-month shift would have gone down much better. So when Intel says they have the root cause diagnosed and are rolling out the solution, that gets read as "doomed, failure, meltdown mode and horribly broken".




7nm is shipping about when they said it would. The gap here is when high-volume shipping would kick in.
If Intel weren't so deeply wedded to 40-50% markup margins, they could probably shift to high volume sooner (and get some of that delay back). There is a greed portion here too: Intel execs just don't want the stock price to go down even when there is a screw-up. Sometimes you just have to take some lumps to get back in the game on competitive footing. [A higher markup for a fab process that is better than everyone else's is something they could perhaps make a 'value add' case for; a higher markup for a fab process that is in 2nd or 3rd place... not so much.]
Realistically, Intel still hasn't actually released the 10nm chips they were aiming for back in c.2014, though. They've released some 10nm chips (only in the U and Y segments), but those struggle to match their 14nm chips in CPU performance, and Intel tries to gloss over that by finally playing catch-up on the iGPU. Whether Tiger Lake will finally realise something like what Intel wanted to release remains to be seen. It looks promising-ish, though a lot of focus has been put on the GPU again, so that could be more makeup to cover mediocre CPU performance, or the icing on a finally really decent 10nm release.
 
Realistically, Intel still hasn't actually released the 10nm chips they were aiming for back in c.2014, though. They've released some 10nm chips (only in the U and Y segments), but those struggle to match their 14nm chips in CPU performance, and Intel tries to gloss over that by finally playing catch-up on the iGPU. Whether Tiger Lake will finally realise something like what Intel wanted to release remains to be seen. It looks promising-ish, though a lot of focus has been put on the GPU again, so that could be more makeup to cover mediocre CPU performance, or the icing on a finally really decent 10nm release.

Yup. Early Xe results look interesting, but could also be a distraction from mediocre CPU performance.
 
No problem... That delay should give Apple a good start. They need all the help they can get.
 
I don’t think Apple ever publicly complained about Intel’s performance. And Apple did use their hardware; they had no choice but to, as their own hardware ran on it. It’s just that what Intel provided didn’t perform anything like what Intel had promis... wait. Is it Groundhog Day?

You are not imagining stuff. It’s the second time he has posted the same response and he even ignored the first time I responded to this, so I am not going to bother typing out the same stuff all over again.
 
Intel has barely produced any meaningful CPU improvement in literally 5 years, and that's not acceptable in any way.

Intel doesn't view great innovation as part of its future CPU roadmap, and it's no surprise that its clients will be forced to make a transition.
 
Many people here are confused about the difference between Intel (IDM) and TSMC (foundry).
TSMC makes many incremental process improvements because their process development is not tied with any particular chip design timelines. This does not make sense for Intel. They upgrade their process in greater increments. Combined with the fact that after introduction of finfet transistors (Intel at 22nm) the 'nm' in process name lost any connection with the process feature sizes, the fact that TSMC updates their process in many incremental steps (while still using 'nm' naming convention) led to misleading gap between TSMC and Intel processes. TSMC does have some lead there but it's not as big as the process names suggest. Intel could easily jump ahead in this game by naming their next process 1 nm but they won't because they do not compete with TSMC for foundry customers and thus they don’t care.
 
I suspect they were. There have been references to AMD chips in beta builds of macOS Catalina.

The catch is that AMD hasn’t exactly been as phenomenal when it comes to landing the products. Some issues plagued the 3000-series at launch, and some may still impact them today. These are all things that I encountered with the gaming PC I built around the launch of the 3600:
  • There were issues with 3000-series chips not reaching advertised boost clocks. AMD had to issue a microcode fix.
  • There were issues with the 3000-series chips boosting under very low loads because it was set to be too sensitive.
  • AMD shipped a broken RDRAND instruction that had to be fixed by microcode. It didn’t break a lot of software, but it just looks very sloppy.
  • AMD’s turnaround wound up being on the order of 1-2 months to resolve the day-one issues.
  • Even after all the fixes, the 3600 in my case still idles at higher temps than a 2600 I also had, and seems to boost even when Windows is doing not a whole lot.
The chips themselves are good performers, but when Apple is likely upset with Intel’s QC and wants more power-efficient chips... the 3000-series isn’t a great counter-example. If Apple was getting engineering samples from AMD, they may have made the final call on which way to go based on that experience.

Enthusiasts are more willing to put up with this sort of thing to get cheaper, faster chips. But I can’t imagine Apple doing that.

Switching to AMD also would have upset their relationship with Intel, and Intel likes to play favorites. That could mean that switching from Intel to AMD was a one-way street, and would just be delaying the inevitable. Eventually Apple would have to transition again to something else, so they may as well take that step today.

I'm looking forward to seeing where we are in a year. Did Apple do this solely to own their own destiny, or are we going to find that Apple Silicon also significantly outperforms AMD/Intel?
 