OK. Let's recap a bit. What we know about 10nm Intel is that it's ever so slightly better in terms of performance, right? We have AnandTech and Tom's Hardware reviews agreeing on that. It's not their typical full review due to the limitations Intel is putting on them, but it's what we got. Intel might bias things a bit, but this isn't under reasonable dispute. (And all manufacturers will try to put their products in the best light, so no need to single out Intel here.)
Do you have anything to offer up here other than reiterating my post to make it seem like you said it first?
Second, despite a move to 7nm, AMD can only narrowly claim better performance per clock over 14nm Intel. 14nm Intel still has the best single-threaded performance available (due to the absurd clock rates these days).
In single threaded applications, as I said. Again, you're reiterating what I said. In multi-threaded workloads the 3900X flies by the 9900K.
Skylake is EOL. Skylake is a rehash of the 14nm process from the original Broadwell release in 2014: five years of being on 14nm. Also, what "absurd clock rates" are you talking about? Were you not alive during the prime of the Pentium 4, which was pushing near 4 GHz?
Also, I'm not sure if you're capable of reading what I've said twice now.
It has taken Intel 9 years, going back to a year before Sandy Bridge, to get where they are right now, even with all the 14nm refreshes and refinements. It has taken AMD 3 years to surpass Intel's 9900K in multi-threaded workloads and fall only a few percent short of it in single-threaded workloads. That is massive. AMD's total R&D budget is less than a third of Intel's. AMD's processor design is radically different from Intel's. Intel made fun of AMD for going with Infinity Fabric (IF) and chiplets. Guess what? Intel plans on developing something like IF and using chiplets in future designs. Intel's current core-interconnect tech isn't stable enough for massive core counts or higher frequencies. Sure, 7nm isn't too great for AMD at the moment, but again, look at what they have now versus 9 years of development. That close. That short amount of time.
They also have the best entire package of features, something they are unlikely to concede with things like TB3 on chip and continuing their gains in memory performance.
TB3 is open to whoever wants to use it, the caveat being that Intel must inspect the hardware and code themselves. What features? AVX-512, which is used by a small number of applications? Memory? If Intel wasn't worried, they wouldn't have increased core counts and base clocks as a response to AMD.
The point being, moving to this tiny process node hasn't led to leaps and bounds of improvement like moving from 45nm to 32nm did back in the day.
So this isn't exactly a "tiny process node." A process node's name hasn't corresponded to actual feature size in many years. Intel's 10nm is roughly the same size and density as TSMC's 7nm. You also gloss over the major fact that Westmere, Nehalem's successor, brought better instruction sets and a revamped processor design, not merely a die shrink.
It didn't for AMD and also hasn't for Intel. Today, with everything going mobile, it's more about feature set in a given power envelope on the consumer side.
Except you've got this wrong. So wrong. Intel's 10nm is the same size and density as TSMC's 7nm, which AMD uses. The number in front of 'nm' hasn't been a true indicator of die size or density in years. Intel has refined 14nm over 5 years now. When Intel set out to do 10nm, they attempted a bigger density jump than they ever had before and didn't realize the repercussions until they were several years down the pipeline. Intel began working on 10nm around 8 years ago. Even Intel's CEO has stated they took a major misstep.
https://www.pcgamer.com/intel-says-it-was-too-aggressive-pursuing-10nm-will-have-7nm-chips-in-2021/
All this BS about the best gaming CPU is a tiny market. And the best performing consumer desktop CPU, but it's a 12-core monster? Really? What percentage of folks needs that?
Who said anything about gaming and only gaming? What do you define as average? People who putt around on their MBPs? Do people have to continuously use low core count products if they're content creators? If they need to speed through equations? Render models?
It's like me saying "Do most people on MR need MacBook Pros when the most professional thing they do is write a Word document or edit a few photos they took on their iPhone?"
It's like the Mustang or Corvette of cars. Great, you have the best sports car, but it's the midsize sedans/SUVs and midsize trucks that make the money.
The Mustang and Corvette are not even in the same class of vehicles. You could label them as sports cars, but the Mustang has been watered down. If you're comparing the GT350 to a Corvette ZR1, then you might have a point here. Though if you want to be anal about it, compare the GT350 to a Camaro ZL1.
These things push electrons in review articles because that's what prosumers/gamers read, but the masses care about the complete package and don't even know if it's an i3 or i7 or Zen 2 under the hood. That's also what lands design wins: Newegg or Amazon sales of individual CPUs are one thing; winning the MacBook Pro or Dell XPS is another.
Fine. Intel supplies Dell and HP with high-core-count mobile Xeons, which offer better long-term durability and suitability than the consumer line of chips Apple uses in its "pro" laptops. Additionally, if you can't tell the performance difference simply by using a mobile i3, i7, or i9 processor, then there's something wrong with you. And again, there is no mobile Zen 2 processor right now.
Anyway, it will be interesting when AMD and Intel can compete in the same power envelope.
Here's the reality. TDP is not a measure of actual power draw and never will be. At idle, a 9900K might use 25 watts as it downclocks. At full load, with multiple cores at 5 GHz, it could very well be drawing upwards of 220 watts. The 9900K is rated at a 95-watt TDP.
The Ivy Bridge-E processor in my current workstation is rated at 130 watts TDP. Most tests have found it to draw close to 300 watts at near full load at stock, and my processor is overclocked. When it comes to Intel, TDP is a guideline for what kind of heatsink/fan (HSF) you should use to remove the heat most efficiently.
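To make that gap concrete, here's a minimal Python sketch using the ballpark wattage figures from this discussion (the idle and full-load draws are rough illustrative numbers, not measurements; 95 W is Intel's rated TDP for the i9-9900K):

```python
# Illustrative figures from the discussion above; not lab measurements.
RATED_TDP_W = 95.0       # Intel's rated TDP for the i9-9900K
IDLE_DRAW_W = 25.0       # rough idle draw as the chip downclocks
FULL_LOAD_DRAW_W = 220.0 # rough draw with multiple cores near 5 GHz

def tdp_ratio(draw_w: float, tdp_w: float) -> float:
    """Return actual power draw as a multiple of the rated TDP."""
    return draw_w / tdp_w

if __name__ == "__main__":
    print(f"Idle:      {tdp_ratio(IDLE_DRAW_W, RATED_TDP_W):.2f}x TDP")
    print(f"Full load: {tdp_ratio(FULL_LOAD_DRAW_W, RATED_TDP_W):.2f}x TDP")
    # Full load can exceed 2x the rated TDP, which is why TDP is a
    # cooling guideline rather than a power-consumption spec.
```

The point of the sketch is just that the rated number bounds neither end: actual draw sits well below TDP at idle and well above it at sustained boost.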
I most certainly welcome competition from AMD. I own Intel stock and am kicking myself for not diversifying into AMD a couple years ago when I think we all knew AMD was on the rise. But at the same time, I am decreasing my exposure to Intel. Some of that is broader market reasons (and why I'm not buying AMD now), but some is long term concerns over design wins and their move away from ultra portables and mobile chips.
AMD isn't moving away from mobile chips. They're lax about the market because Intel has a massive foothold there. AMD's concern is enterprise 1st, consumer desktop 2nd.
ARM is going to eat at them from one side, AMD from another, and they are selling off some of what remains (like their 5G tech to Apple). So data center and laptops are the noteworthy stuff. Eh, time to take the profits and run. So, believe me, I'm not a huge Intel fanboy here. Just calling facts the way I see them.
Intel doesn't care about ARM. They did roughly 10 years ago, and because of a bad CEO their plans were purposefully delayed; by the time they got in, it was already late. Though not "late" in the sense you're thinking of. Late as in someone with an ego didn't want to follow in another person's footsteps; they wanted to make a name for themselves. It's been rumored for over a decade that Intel could have been producing the first few ARM processors in iPhones. ARM themselves don't directly supply OEMs; third parties that license ARM designs do. It's why Apple looks to TSMC to build its licensed derivative of an ARM processor.
And I never accused you of being an Intel fanboy. Misinformed, sure. I haven't bought an AMD processor in nearly 20 years, so I'm not an AMD fanboy either. I keep myself informed.
I posted this a few weeks ago. It should give you an idea of how badly Intel was maneuvering things over a decade ago due to bad management.
No idea. I wonder if it'll be published with a filing? Intel wasted a lot of time and money under Krzanich, but Intel's problems began not long after Core came out in 2006, when they started shifting focus to side projects instead of the enterprise and consumer products that were their core business.

The issue isn't simply that they were late. The CEO in the early 2000s laid the groundwork for the company's future in mobile, but the guy who came after him froze that and dicked around. That same guy struck the deal with Apple for Core processors, and later admitted how badly he underestimated how fast the mobile market would grow once Apple paved the way. By the time Intel did begin to focus on mobile, it was too late; other companies had already established themselves in R&D and contracts. The theory is that Intel would have been a close Apple partner for the iPhone for a considerable amount of time, and, holding the patents, might even have had native x86-64 running alongside its then-current ARM processors.

Krzanich replaced Otellini and made changes to further the company, but everything was late by then, and Krzanich wasn't wise when it came to direction. He dumped money into fruitless projects (and was eventually pushed out over a relationship with an employee), and Intel's other divisions have made some grave errors overestimating their own ability.
They're a rich company; they'll survive. AMD just released Zen 2, which is a powerhouse when it comes to multi-threaded applications. But in true AMD form, they've flubbed the launch like every other product launch they've done. It'll take months to fix.