You forgot the 2nd generation 3D V-NAND (in future PCIe SSDs)! The perfect storm - IGZO, DDR4, Maxwell, Broadwell all nearly ready
I don't get how computers are still using DDR3 when the PS4 has 8 GB of DDR5. Can anyone explain this to me? I'm not understanding.
What? This doesn't make any sense. The PS4 has GDDR5, not DDR5.
My suspicion is that we're probably going to see the next set of machines WITH Broadwell and possibly IGZO panels at some point next year. This year we won't be seeing anything bigger than the refresh the MacBook Air got earlier this year.
As for Maxwell, my suspicion is that because of the pretty significant GPU upgrade that's supposed to come with Broadwell, they might drop the dedicated GPU entirely. If they don't, then we'll probably see an upgrade to Maxwell, as they usually upgrade the GPUs whenever they move to a new CPU architecture.
If Apple's going to be holding any more events this year, the focus will obviously be on the iPhone 6, possibly a new iPad with some iPod news, and OS X 10.10 along with iOS 8.
Where have you heard that the iGPU which comes with Broadwell is supposed to be a significant upgrade? The 850/860M is supposed to be a huge upgrade over the 750M, but I haven't heard the same about Haswell's Iris Pro versus Broadwell's iGPU.
The latest news is that Broadwell's GT3 is supposed to have 48 EUs (vs. 40 in Haswell), so that could translate to around a 20-25% increase in computing performance. Of course, that doesn't matter much for many practical purposes, since Iris is bandwidth-limited anyway, and I don't see how they can fix that.
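Rough back-of-the-envelope on where that 20-25% comes from (just a sketch; the EU counts are rumored rather than confirmed, and any 14 nm clock bump is an assumption):

```python
# Rumored EU counts, not confirmed specs.
haswell_gt3_eus = 40
broadwell_gt3_eus = 48

# Raw execution-unit scaling, assuming clocks stay the same.
eu_scaling = broadwell_gt3_eus / haswell_gt3_eus - 1
print(f"EU scaling alone: {eu_scaling:.0%}")  # -> 20%
# A modest clock bump on 14 nm would push this toward ~25%,
# assuming the workload isn't already bandwidth-bound.
```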
The iGPU improvements are always over-hyped, and I doubt this round will be any different. They have made some very impressive improvements. As for Apple, it's probably a matter of time. I can say that their discrete graphics implementations have been far more problematic than integrated graphics in recent years. The 2011s have a high failure rate. 2010 had a high failure rate. 2008 or 2009 (can't remember which) had a very, very high failure rate. The combination of CPU + GPU at high load can also draw significantly more power than their chargers will supply; 2011 was the worst in that regard with the switch to quad-core CPUs and a fairly hot GPU.
Bandwidth limited? As far as I know, the eDRAM runs at the same speed as the iGPU. And the iGPU in 14 nm processors is faster, because it generates less heat, which means it can sustain a higher average clock rate than Haswell and Ivy Bridge.
Regular DDR3 RAM isn't a big limitation either. Or do you call 12.8 GB/s a bandwidth limitation?
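For reference, 12.8 GB/s is the theoretical peak of a single DDR3-1600 channel; the quick arithmetic (assuming a standard 64-bit channel, and ignoring the dual-channel configuration most of these machines actually ship with):

```python
# Theoretical peak for DDR3-1600 on one standard 64-bit channel.
transfers_per_second = 1600e6   # 1600 MT/s
bytes_per_transfer = 8          # 64-bit bus = 8 bytes per transfer

peak_gb_s = transfers_per_second * bytes_per_transfer / 1e9
print(f"{peak_gb_s:.1f} GB/s")  # 12.8 GB/s; dual channel doubles it to 25.6
```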
You should know that bandwidth really isn't something to stare at blindly, as it's only a limiting factor when you're working with a few really big chunks of data, like video or 3D graphics. CPUs generally work on lots of small chunks of data, and with data arranged like that, latency, which is the primary advantage of DDR over GDDR, matters more.
Like I already said, there's no point in having GDDR as CPU memory unless the CPU and GPU have to share the same memory pool (like the Xbox One, PS4 and Wii U) and you're doing the kind of work that GPUs do best either way.
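A crude way to see the latency-vs-bandwidth point (a minimal sketch, not a rigorous benchmark; the array size and index count are arbitrary): a sequential pass over a large array is limited by memory bandwidth, while scattered random reads mostly pay DRAM latency, which is where DDR's tighter timings help CPU-style workloads more than GDDR's raw throughput would.

```python
import time
import numpy as np

N = 50_000_000                       # ~400 MB of float64, far larger than any cache
data = np.ones(N, dtype=np.float64)

# Bandwidth-bound: one long sequential streaming pass.
t0 = time.perf_counter()
seq_total = data.sum()
seq_time = time.perf_counter() - t0

# Latency-bound: scattered reads via a random index array.
idx = np.random.randint(0, N, size=N // 10)
t0 = time.perf_counter()
rand_total = data[idx].sum()
rand_time = time.perf_counter() - t0

print(f"sequential: {seq_time / N * 1e9:.2f} ns per element")
print(f"scattered:  {rand_time / len(idx) * 1e9:.2f} ns per element")
# Per element, the scattered reads come out far slower: each one risks a
# cache miss and pays DRAM latency, while the sequential pass streams at
# whatever bandwidth the memory controller can sustain.
```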
Considering we did get an improvement big enough for the dedicated GPU to become an optional extra on the 15", I really wouldn't be too surprised if the next leap is in the same class. That's at least what I've heard from people I've met who actually work at Intel developing these chips and the drivers for them.
As for unreliable dedicated chips, there's been the GeForce 8600M (mid 2007 and early 2008) and the Radeon 6XXX series (early and late 2011), but the 320/330M in the 2010 models has not been suffering from a breakdown epidemic. With the 8600M the problem was caused by a manufacturing defect, and in the Radeon 6XXX series it was bad lead-free flux (which also killed a lot of early Xbox 360s).
I personally still own a mid 2007 MacBook Pro, and it's been fixed once under the extended warranty for that. I've also got an early 2011 machine, but that has yet to fall victim to Radeongate, and I genuinely hope it doesn't.