In the past, there was always concern about whether AMD could deliver CPUs in sufficient volume on a relatively short time-span. That's one reason AMD never really got any business from Apple.
Steve had probably worked out a deal with Intel that gave Apple terms equal to or better than those of some much larger OEMs. Together with Intel's performance advantage, that pretty much sealed the deal.

Now, none of that seems to be true anymore. So we'll see what comes out of this.
 
It's the majority of the article, but here are some highlights:

The debut of Intel's 10nm process has been a particular sore spot, with the forthcoming Whiskey Lake set to be the fifth new architecture to debut on the 14nm process. Prior to 14nm, Intel had maintained a two-stage "tick-tock" strategy for its processors, where a new foundry node carrying a small architecture update over the previous processor was a "tick," and a more significant architectural evolution on a matured process was a "tock."

We first reported on the demise of the tick-tock strategy in 2016. Things have only grown worse for Intel since then as 10nm has faced further delays. To put this delay in perspective, Intel's original roadmaps had 10nm technology debuting in 2015. There are several reasons for the delay, but Intel CEO Brian Krzanich explained that some features in Intel's 10nm process require up to five or six multi-pattern steps, whereas other competing foundries are known for up to four steps in 10nm or 7nm processes.

This development has consequences for Intel, its customers, and its competitors. First, Intel has lost the technology advantage it once held over the rest of the semiconductor industry. While you cannot compare the dimension in the node name directly across foundries, competitors such as TSMC, Samsung, and Global Foundries have largely reached parity with Intel's 10nm using their 7nm processes, with transistor densities besting Intel's own at 10nm. Intel used the transistor density metric to combat the marketing furor that the node names created, but it seems to have lost those bragging rights as well.

More importantly, Intel's competitors are starting to enter volume manufacturing on competing 7nm nodes. Technology leadership previously mattered to Intel mainly as an enabler of superior products, but its relatively recent opening of its fabs to outside customers has also lost some of its luster as a result of these developments.

In the earnings call, Intel also acknowledged that it expects to cede market share to rival AMD, which has enjoyed recent success thanks to the debut of new CPU architectures such as Zen that have begun to close the performance gap with Intel's own CPUs. AMD is expected to make significant gains in the server space thanks to recent developments, and after spinning off its own foundry into Global Foundries, it has been using a mixture of the former in-house foundry and TSMC. AMD is expected to debut consumer products on the 7nm node in 2019.
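
On the transistor density metric mentioned in the excerpt: as I understand it, the published way of counting (Mark Bohr's 2017 proposal) weights NAND2 and scan flip-flop library cells rather than quoting a raw pitch number. A rough sketch of that arithmetic, using made-up cell areas and an assumed flip-flop transistor count purely as placeholders:

```c
#include <stdio.h>

/* Sketch of the weighted transistor-density metric attributed to Intel
 * (Mark Bohr, 2017):
 *   density = 0.6 * (NAND2 transistors / NAND2 cell area)
 *           + 0.4 * (scan flip-flop transistors / flip-flop cell area)
 * The cell areas and flip-flop transistor count below are placeholders,
 * not published data for any real node. */
static double weighted_density_mtr_per_mm2(double nand2_area_um2,
                                           double sff_area_um2)
{
    const double nand2_transistors = 4.0;  /* a 2-input NAND is 4 transistors */
    const double sff_transistors   = 36.0; /* assumed scan flip-flop count */

    /* Transistors per um^2 is numerically equal to millions per mm^2. */
    return 0.6 * (nand2_transistors / nand2_area_um2)
         + 0.4 * (sff_transistors   / sff_area_um2);
}

int main(void)
{
    /* Two hypothetical nodes with hypothetical library-cell areas (um^2). */
    printf("node A: ~%.0f MTr/mm^2\n", weighted_density_mtr_per_mm2(0.040, 0.360));
    printf("node B: ~%.0f MTr/mm^2\n", weighted_density_mtr_per_mm2(0.044, 0.400));
    return 0;
}
```

The takeaway is that the reported number depends on which library cells you measure, which is part of why the density bragging rights got contested once the node names stopped lining up.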
 
When you get down into single-digit nm, it's pretty obvious Intel and TSMC are hitting the limits of silicon. We've already hit the gigahertz barrier; all that's left is adding more cores to the CPU and architectural improvements. I doubt CPUs and SoCs are going to get much faster.
 
When you get down into single-digit nm, it's pretty obvious Intel and TSMC are hitting the limits of silicon. We've already hit the gigahertz barrier; all that's left is adding more cores to the CPU and architectural improvements. I doubt CPUs and SoCs are going to get much faster.
What barrier? Can you be more specific?
 
Who knows what Apple has been doing with the ARM license. Apple is clearly ahead of the curve when designing processors.

But clearly, Apple wouldn't be patching iOS if their hardware weren't affected.
I would think it unlikely that they introduced such a bug, even if they modified the design somewhat.

I would not be surprised by Apple applying an unnecessary patch because of user ignorance, paranoia, and litigation. And BTW, your iOS device just got slower!

I think Microsoft might have initially applied an unnecessary Meltdown patch to AMD.
 
When you get down into single-digit nm, it's pretty obvious Intel and TSMC are hitting the limits of silicon. We've already hit the gigahertz barrier; all that's left is adding more cores to the CPU and architectural improvements. I doubt CPUs and SoCs are going to get much faster.

LOL. They've been saying that since at least 1992, when I started designing high-end CPUs. The end of the road keeps getting delayed, thanks to things like strained silicon, SOI, copper wiring, FinFETs, DFM, packaging improvements, MCMs, etc. There are many more cards left to be played: diagonal wires, HBTs/CML, III-V, etc.
 
Say Nintendo, Sony, Microsoft, and now Intel.

Perfect example proving my point. AMD does one processor for a game console maker every 5 years or something. Big difference between that and producing a range of processors every 18 months or so.
https://www.geek.com/chips/intel-predicts-10ghz-chips-by-2011-564808/

None of Intel's CPUs are going past 5 GHz unless they're overclocked with liquid cooling. How are they going to solve that problem?

You don't WANT to go past 5 GHz, because Power = C × V² × f. Anything you can do to increase performance other than increasing f is a good idea, especially when you consider that the easiest way to increase f is to increase V.
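
A minimal sketch of that point, assuming (purely for illustration) that each extra GHz needs roughly another 0.1 V of supply; these numbers aren't from any real part:

```c
#include <stdio.h>

/* Dynamic power scales as P = C * V^2 * f, and higher f usually needs
 * higher V, so the power cost of chasing clock speed compounds. The V/f
 * pairing here is a made-up illustration, not silicon data. */
int main(void)
{
    const double C = 1.0;                 /* normalized switched capacitance */
    const double points[][2] = {          /* { frequency in GHz, supply in V } */
        { 3.0, 1.0 }, { 4.0, 1.1 }, { 5.0, 1.2 }, { 6.0, 1.3 },
    };
    const double base = C * points[0][1] * points[0][1] * points[0][0];

    for (size_t i = 0; i < sizeof points / sizeof points[0]; i++) {
        double f = points[i][0], v = points[i][1];
        printf("%.0f GHz @ %.1f V -> %.2fx the power of 3 GHz\n",
               f, v, (C * v * v * f) / base);
    }
    return 0;
}
```

Under those assumptions, doubling the clock from 3 GHz to 6 GHz costs about 3.4x the power, which is why more cores and better IPC are the preferred levers.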
 

Intel (galaxy brain): can't have vulnerabilities if you can't make processors due to your process being defective!

Also, Meltdown and Spectre aren't real-world attacks. They're theoretical attacks that haven't been exploited in the wild because of their complex requirements.

The average consumer will never face the issue. The threat matters more for shared servers, since the attacks rely on malicious code already running on the system.

People don't understand how irrelevant these attacks are.
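
For reference, this is essentially the victim code pattern from the published Spectre variant 1 write-up (Kocher et al.), which gives a sense of what those requirements look like: the attacker needs something like this reachable in the target's address space, plus a way to influence x and to time the cache afterwards.

```c
#include <stddef.h>
#include <stdint.h>

/* Bounds-check-bypass (Spectre variant 1) victim pattern, essentially as
 * published by Kocher et al. Shown only to illustrate the requirements:
 * a mispredicted branch speculatively reads array1[x] out of bounds, and
 * the leaked byte selects which cache line of array2 gets touched. */
uint8_t array1[16];
uint8_t array2[256 * 512];
unsigned int array1_size = 16;
volatile uint8_t temp;          /* keeps the access from being optimized away */

void victim_function(size_t x)
{
    if (x < array1_size) {
        /* Architecturally safe, but speculation can execute this read with
         * an out-of-bounds x before the bounds check resolves. */
        temp &= array2[array1[x] * 512];
    }
}
```

Turning that into a working exploit still requires precise cache timing and a lot of target-specific knowledge, which is the point about complexity.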
 
Perfect example proving my point. AMD does one processor for a game console maker every 5 years or something. Big difference between that and producing a range of processors every 18 months or so.
Sometimes there's only a one-year gap. It's not AMD's fault if console cycles are long.
 
Since when? What console maker has updated their CPU twice in two years?

I was an AMD cpu designer for 9+ years. You really don’t want to go there.
One year MS, the next Nintendo.

One year Sony, the next MS.
 
No they shouldn’t. AMD is incapable of sustained success.

AMD is supplying Vega GPUs to Apple already. AMD is also providing semi-custom CPU+GPU chips for Sony's PS4 and Microsoft's Xbox One.

Care to elaborate on how AMD can't supply whatever Apple may need?
 
That's not what I said. That's different processors in the same generation, just different SoCs. Big difference between that and what Apple needs to provide for its lineup and refresh on a regular basis.
AMD has been significantly refreshing their APUs basically every year, and they continue with Ryzen.
 
AMD is supplying Vega GPUs to Apple already. AMD is also providing semi-custom CPU+GPU chips for Sony's PS4 and Microsoft's Xbox One.

Care to elaborate on how AMD can't supply whatever Apple may need?

The GPU team has nothing to do with the CPU team. All the way back to ATI, the GPU folks are used to delivering a range of GPUs at rapid intervals (mostly because they use ASIC design methodology, not custom design methodology). One has nothing to do with the other.
 
You don't WANT to go past 5 GHz, because Power = C × V² × f. Anything you can do to increase performance other than increasing f is a good idea, especially when you consider that the easiest way to increase f is to increase V.


Since you're an engineer, I have a question for you. Since GPUs are so much faster at certain tasks, such as cryptocurrency mining, 3D gaming, and graphics design, why are we using DDR4 instead of GDDR5/5X or HBM/HBM2? Wouldn't a GDDR5 motherboard give a substantial improvement over DDR4?
 
AMD has been significantly refreshing their APUs basically every year, and they continue with Ryzen.

Yes, and within a year or so they will be behind the game again. Just like when we were ahead of the game with Opteron/Athlon 64/K8. How long did that last?
Since you're an engineer, I have a question for you. Since GPUs are so much faster at certain tasks, such as cryptocurrency mining, 3D gaming, and graphics design, why are we using DDR4 instead of GDDR5/5X or HBM/HBM2? Wouldn't a GDDR5 motherboard give a substantial improvement over DDR4?

It depends on the task. Certain tasks are bandwidth-limited, others are latency-limited. Some make sequential memory accesses, others random. The traditional desktop von Neumann architecture optimizes for single-threaded (or lightly threaded) tasks.
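
A back-of-the-envelope sketch of that tradeoff. The bandwidth and latency figures below are assumptions for illustration only, not DDR4/GDDR5/HBM specs; the point is just that a higher-bandwidth but higher-latency part wins streaming workloads and loses dependent random-access ones:

```c
#include <stdio.h>

/* Two made-up memory profiles: "ddr-like" (lower bandwidth, lower latency)
 * and "gddr-like" (higher bandwidth, higher latency). The numbers are
 * illustrative assumptions, not real specifications. */
typedef struct {
    const char *name;
    double bandwidth_gbs;  /* sustained streaming bandwidth, GB/s (assumed) */
    double latency_ns;     /* dependent random-access latency, ns (assumed) */
} mem_profile;

int main(void)
{
    const mem_profile mems[] = {
        { "ddr-like",  50.0,  80.0 },
        { "gddr-like", 300.0, 150.0 },
    };
    const double stream_bytes = 1e9;   /* 1 GB sequential scan */
    const double chased_reads = 1e7;   /* 10M dependent (pointer-chasing) reads */

    for (size_t i = 0; i < sizeof mems / sizeof mems[0]; i++) {
        double stream_ms = stream_bytes / (mems[i].bandwidth_gbs * 1e9) * 1e3;
        double random_ms = chased_reads * mems[i].latency_ns * 1e-6;
        printf("%-10s 1 GB stream: %6.1f ms   10M dependent reads: %7.1f ms\n",
               mems[i].name, stream_ms, random_ms);
    }
    return 0;
}
```

Under those made-up numbers, the GDDR-style part finishes the streaming scan about six times faster but takes nearly twice as long on the latency-bound pointer chase, which is closer to what typical desktop workloads look like.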
 
Yes, and within a year or so they will be behind the game again. Just like when we were ahead of the game with Opteron/Athlon 64/K8. How long did that last?
They don't have to pay for their own fabs anymore. There will not necessarily be another Bulldozer.
 