Intel plans to move up the launch of its 14-nanometer Coffee Lake processors, introducing them in August of 2017 instead of January 2018. According to DigiTimes, the launch is being moved up because of "increasing competition from AMD's Ryzen 7 and Ryzen 5 processors."
The site says Intel will release several K-series Core i3, i5, and i7 processors starting in August, along with its Z370 chipsets. Additional CPUs will come at the end of 2017 or early in 2018.
Intel also plans to unveil its Basin Falls platform, with Skylake-X and Kaby Lake-X processors, at Computex 2017 (May 30 to June 3), roughly two months earlier than originally scheduled.
Intel's Skylake-X series features 140W processors with 6, 8, and 10-core designs, while the Kaby Lake-X series features a 112W quad-core processor. Intel also plans to release a 12-core Skylake-X processor in August. Intel's Basin Falls platform could potentially be used in future Mac Pro machines and the rumored high-end server-grade iMac.
Coffee Lake chips appropriate for Apple machines were originally set to launch somewhere around the second quarter of 2018, so if rumors of Intel's updated timeline are true, the launch could be moved forward to either late 2017 or early in 2018.
Coffee Lake chips are manufactured on Intel's 14-nanometer process and will be the fourth processor family built on it, after Broadwell, Skylake, and Kaby Lake.
Apple is rumored to have new machines in the works for 2017, including new iMacs, which are likely to use Kaby Lake chips.
Article Link: Intel Rumored to Debut Basin Falls Platform in May, Launch Coffee Lake Chips in August
Something slipped at Intel. They fell off their "Tick/Tock" cycle a few years ago with the Haswell/Broadwell processors, and I think that's been causing them a lot of issues. Their innovation cycle was broken and they haven't been able to get back into it.
They don't need X-class, but may still pick up core count a bit without changing thermal design power (TDP) limits.
With the desktop versions of gen 8 (CoffeeLake), mainstream i7 versions are supposed to pick up 6 cores. That appears to be a bit of Intel swapping integrated graphics processor (iGPU) space on the die for more x86 core space (they will only be GT2 graphics). That actually makes sense in the context that Ryzen 5 and Ryzen 7 devote no die space to an iGPU at all. (AMD's integrated options are coming later in the year. Not sure Intel has an answer for those any time in the next 9-10 months.)
The mainstream gen 8 should have a Xeon E3 equivalent that should also pick up 6 cores: the E3 12xx v7 series. If Intel rapidly moved up the timetable, they may be able to shrink that schedule also. So if the desktop moved approximately 5 months earlier (January '18 to August '17), then perhaps the Xeon E3 v7 moved up the same amount to land in October '17 (March '18 to October '17). Xeon processors usually get a higher amount of beta testing and defect quality control, though, so Intel may not be able to claw back almost half a year.
E3 v6 would be a safer choice for Apple if shooting for a fixed timeline. If the E3 v6 to v7 timeline has seen a major disruption, then Apple's E3 plans are probably thrown off from the original projected timeline. November-December has a decent chance of being more realistic if this schedule shift extends to the E3. (Not sure it does when the mainstream top end is shifting to 6 cores. Apple could sell the 6-core sizzle as the hype tag line. But yeah, a six-core E3 v7 with 32-64GB of ECC RAM plus a 5K display, for less than most Mac Pro prices, would draw in a decent number of "pro" market customers at the lower end of the Mac Pro market.)
There is a new PCH chipset with gen 8 (300 series), which technically should not throw Apple's work for a loop if they were already deep into a design using v6 (and/or gen 7 KabyLake). Likewise the E3 v7 has a similar chipset bump, but Apple could just skip some of the more advanced features if caught off guard.
All of that said, Intel knocking 5 months off the release timeline is kind of dubious. The approximately 3 months (a quarter) for Skylake-X is far more tractable; Intel probably had some slack in the schedule to get around any last-minute significant bugs that might pop up. 5 months is suggestive of throwing something out there that isn't fully baked and vetted. Yes, gen 8 is just an incremental change, but there is a decent amount of new stuff in the chipset (a bump to USB 3.1 gen 2, among a few other things).
Skylake-X <===> Xeon E5 1xxx v5 ~ 140-160W TDP
mainstream KabyLake <===> Xeon E3 12xx v6 ~ 70-80W (max out at 4 cores)
mainstream CoffeeLake <===> Xeon E3 12xx v7 ~ 70-85W (max out at 6 cores)
Xeon doesn't necessarily mean the E5 class. There are four ranges: Xeon D, E3, E5, and E7. Not being limited to E5 leaves plenty of other options both up and down the TDP range.
The iMac is likely to select from the last two product equivalency classes above. Even if the enclosure is modified to get a bigger TDP envelope, that would probably get consumed by a more "desktop"-like GPU rather than by pushing on the CPU core count front.
What is awkward for the 21.5" iMac is the Intel 'retreat' on iGPUs with CoffeeLake. Apple would probably have to shift to dGPUs there too.
No Thunderbolt then, and MacOS would need a lot of work done, as it's designed around a few powerful cores instead of several weaker ones.
Even though Intel has retreated on Iris Pro iGPUs (eDRAM) moving forward, remember that the 21.5" iMac (both non-Retina and Retina 4K) use Broadwell CPUs (5575R, 5675R and 5775R) currently.
I suspect Apple wants to get one more revision out of the current 21.5" and 27" chassis, and they will end up using Core i5 and i7 CPUs (Skylake 6585R, 6685R and 6785R for the 21.5"; Kaby Lake i5-7500 and i7-7700K for the 27") when they update them in the fall.
If Apple does that, that would buy them time to find a suitable discrete GPU for the next-generation 21.5" iMac and give it a decent swan song, specs-wise.
That's what the MacBook, MacBook Air and 13" MBP are for. Why gimp the 15" MBP, and eliminate the 17"? Oh wait, Apple already stated that they bugged up the Professional market.
Wait whaaat? They're idiots because they make their computers thinner? I bet 99% of iMac users 1) do nothing that generates heat, and 2) want a thin sexy computer.
How about 8? or 6.. Please?
Hell no. Apple can't even adequately cool a quad-core i7 in an iMac..
Apple are only updating the MacBook and the MacBook Pro. Therefore I'm not shocked if each caters to 50% of customers.
That's what the MacBook, MacBook Air and 13" MBP are for. Why gimp the 15" MBP, and eliminate the 17"? Oh wait, Apple already stated that they bugged up the Professional market.
How about 8? or 6.. Please?
The benches I saw had them doing great in 3d and multicore, most beating Intel. Not so good in single core. But just to F Intel I'd love to see Apple go AMD.
I think the BIOS problems are mostly overblown. There were some disasters, but it is largely an ongoing optimization process.
I am not sure Vega is late, I would say rather that people are impatient.
You can game very nicely on Ryzen. Freaks can buy half the cores for the same money if they want.
It's an entirely new architecture, it's going to have a little bit of growing pains. The BIOS screwiness has settled down with recent rounds of updates - Didn't you hear, it's a brand new architecture.
Gaming performance improves with higher RAM speed, as the CCX communication also improves with higher clock speeds. Having an 8C/16T powerhouse that can be easily overclocked with a couple of clicks, for $320, is insane. Heck, Intel was forced to move up their schedule in response to the pressure.
If not for thunderbolt, I'm sure Apple would love to use the Ryzen 6C/12T and/or 8C/16T CPU at 65W TDP.
Of course it matters if it's a new architecture or not. It takes a little time to get ramped up, but Apple sure as %^&$ ramps up for all contingencies. Apple has gotten OS X/MacOS running on all different kinds of chips, to make sure they can pivot, if circumstances change. They had OS X running on Intel, WELL before 2006, for example.
It doesn't matter if it's a new arch. Apple doesn't want to deal with it at all.
The Ryzen 7 1700 has a base clock of 3.0GHz and can be overclocked, with one button push, to 3.75GHz, quite easily and stably. A 25% overclock, and that is before upping RAM voltages, etc... People can easily get it going higher than that, yes with more power drain, like any higher overclocking. But 3.75GHz/25%, with 1 button push.
Except that the overclocking basically doesn't matter because every single Ryzen chip comes out of the box extremely close to its clock limit. Most people can barely get them up to 4GHz, and anything above that takes more voltage than AMD says is safe for day-to-day usage. Ryzen is a very poor overclocker.
Apple and Apple users don't care in the slightest about core count. 80-90% of Apple users don't use more than 4 cores, and they'll see much more of a difference out of higher clock speeds and "IPC" improvements.
Isn't that what Grand Central Dispatch is for? Whatever happened to it… Is it not around in MacOS anymore?
https://en.wikipedia.org/wiki/Grand_Central_Dispatch
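GCD is still around; it's what most of the system frameworks use to spread work across cores. A minimal sketch of how it looks from app code, assuming a made-up batch of work items (the file names and hashing step here are just placeholders):

```swift
import Foundation   // NSLock
import Dispatch     // DispatchQueue

// Placeholder work items standing in for real files or tasks.
let inputs = (0..<1_000).map { "file-\($0)" }
var results = [Int](repeating: 0, count: inputs.count)
let lock = NSLock()

// concurrentPerform fans the iterations out across the available cores;
// the system decides how many threads to actually spin up.
DispatchQueue.concurrentPerform(iterations: inputs.count) { i in
    let value = inputs[i].hashValue   // stand-in for real per-item work
    lock.lock()
    results[i] = value                // serialize writes to shared state
    lock.unlock()
}

print("processed \(results.count) items")
```

The catch is the usual one: GCD makes it easy to use all the cores, but only if the app's work actually splits into independent chunks.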
Of course it matters if it's a new architecture or not.
It takes a little time to get ramped up, but Apple sure as %^&$ ramps up for all contingencies. Apple has gotten OS X/MacOS running on all different kinds of chips, to make sure they can pivot, if circumstances change. They had OS X running on Intel, WELL before 2006, for example.
But the extreme backlash over the lack of "pro" machines in Apple's lineup shows that some really do care about it. And when it comes to productivity, more cores go a lot further than higher clock speeds.
I was originally responding to the issues encountered at launch for Ryzen - Motherboard BIOS/UEFI, RAM compatibility, non-optimized applications, etc... and not about Ryzen being something other than x86-64
Well, Ryzen is a different microarchitecture. Otherwise, it's x86-64 just like Intel Core. There are specific optimizations, but no recompilation is needed.
Didn't you just above talk about x86-64? So other than some optimizations, it should work A-OK. Apple did pretty well by getting OS X to run on ARM processors, slapping in touch UI and renaming it iOS, right?
But that was in no small part because NeXTSTEP had been running on Intel even before the Apple-NeXT merger.
Sure, but then again, you don't need individual applications to parallelize perfectly, if you can run multiple VMs and applications, transcoding, etc... all at the same time. You know what allows that? More cores and more RAM. Amazing, ain't it?
That depends a ton on the workload. Many, many workloads don't parallelize well.
I was originally responding to the issues encountered at launch for Ryzen - Motherboard BIOS/UEFI, RAM compatibility, non-optimized applications, etc... and not about Ryzen being something other than x86-64
Didn't you just above talk about x86-64? So other than some optimizations, it should work A-OK.
Apple did pretty well by getting OS X to run on ARM processors, slapping in touch UI and renaming it iOS, right?
Sure, but then again, you don't need individual applications to parallelize perfectly, if you can run multiple VMs and applications, transcoding, etc... all at the same time.
You know what allows that? More cores and more RAM. Amazing, ain't it?
It's work that 'Pros' (towards the upper end of the definition) can certainly have going on. Running multiple VMs alone would gladly eat up the extra cores and RAM.
Which is hardly a workload most people will do.
Kick off an hour render job and NOT use your computer for anything else, OR kick off an hour render AND use your computer?
Yes, but last I checked, the human brain doesn't lend itself well to this sort of multitasking. How do more cores and more RAM help a single thing get done faster? Oftentimes, they don't.
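The back-of-envelope way to see why is Amdahl's law: speedup = 1 / ((1 - p) + p/n), where p is the fraction of the job that can run in parallel and n is the core count. A quick sketch (the 70% parallel figure is just an assumed example, not a measurement of any real app):

```swift
import Foundation

// Amdahl's law: speedup = 1 / ((1 - p) + p / n)
// p = fraction of the work that parallelizes, n = number of cores.
func amdahlSpeedup(parallelFraction p: Double, cores n: Double) -> Double {
    1.0 / ((1.0 - p) + p / n)
}

// Assumed example: a job that is 70% parallelizable.
for cores in [2.0, 4.0, 8.0, 16.0] {
    let s = amdahlSpeedup(parallelFraction: 0.7, cores: cores)
    print(String(format: "%2.0f cores -> %.2fx speedup", cores, s))
}
// Even at 16 cores this tops out around 2.9x, because the 30% serial
// portion still runs on a single core.
```

So whether more cores beat higher clocks really does come down to how big that serial portion is for your particular workload.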
Of course it matters if it's a new architecture or not. It takes a little time to get ramped up, but Apple sure as %^&$ ramps up for all contingencies. Apple has gotten OS X/MacOS running on all different kinds of chips, to make sure they can pivot, if circumstances change. They had OS X running on Intel, WELL before 2006, for example.
The Ryzen 7 1700 has a base clock of 3.0GHz and can be overclocked, with one button push, to 3.75GHz, quite easily and stably. A 25% overclock, and that is before upping RAM voltages, etc... People can easily get it going higher than that, yes with more power drain, like any higher overclocking. But 3.75GHz/25%, with 1 button push.
So again, stating things that are absolutely incorrect.
You are correct, the ones who buy MacBooks and 13" MacBook Pros generally don't care about core counts. But the extreme backlash over the lack of "pro" machines in Apple's lineup shows that some really do care about it. And when it comes to productivity, more cores go a lot further than higher clock speeds. I think you are confusing gaming (higher clock speeds) with productivity (more cores).
I bet it made you really mad, when Apple admitted they made mistakes when it came to the "Pro" market.
uCode issues at launch - WITH A NEW ARCHITECTURE. Not seeing any problems, 2 months after launch. I guess you missed the times in history where AMD caught Intel with its pants down. Here's another example of > quad core chips, at a reasonable price. Just because you are super angry with AMD for some reason does not negate the quality > quad chip AMD released.
You're completely missing the point. Why would Apple switch over to a Ryzen platform that's having uCode problems, has lower IPC but more cores, and comes with a reputation with consumers as being lower quality?
You would be flat out incorrect. The Ryzen 7 1700 and Ryzen 5 1600 are the sweet spots for AMD's recent releases. Both can be EASILY overclocked, with 1 button press, zero issues. Just because you refuse to accept it, does not negate the facts.
So yes, Ryzen is an atrocious overclocker.
Your generalization about cores and clock speeds is ridiculously broad. It entirely depends on your workload. Practically everything you do in Photoshop is faster with a 7700K than with a 6900K. Hell, even the 6850K is faster than the 6900K in Photoshop. If you work with AutoCAD or Solidworks or practically any other CAD software, the 7700K will be significantly faster than both the 6900K and the 1800X. Rendering 4K H.264 previews in Premiere Pro is faster with the 7700K than with all previously mentioned CPUs. Exporting to 1080p H.264 from 4K TIFF and H.264 is faster on the 7700K than everything but the 6850K. Let's not even talk about data visualization in Python.
Productivity isn't clear cut. You can't make sweeping generalizations about it and claim any sort of accuracy. Here's one that is accurate and true though. I think you're confusing your fantasy land(/r/amd) with the real world(everywhere else).
I think part of it is the issues with shrinking the dies smaller and smaller. As someone else noted, we're starting to enter the realm where quantum mechanics starts to rear its head. And these new fabs are massively expensive - many billions of dollars. With no real competition, Intel has been able to push out the new process enhancements to reduce their capital expenditures.
Now that AMD is looking competitive, Intel is going to have to spend the money (on FABs and Process R&D) again.
I'll wait until Intel goes 10nm or better. 10nm on a phone makes a huge difference compared to, for example, 16nm. When the 16nm iPhone 7 Plus runs out of battery, a 10nm phone still has 39% battery left, even with a bigger 6.2" display.