I don't get how computers are still using DDR3 when the PS4 has 8 GB of DDR5. Can anyone explain this to me? I'm not understanding.

The PS4 uses GDDR5 memory; actual DDR5 doesn't even exist. The "G" stands for "Graphics", as it's a type generally used for GPUs and has been for years.

At its core GDDR5 is the same kind of memory as DDR3, except it sits behind a bus designed to prioritize bandwidth over access times. The reason for this is that GPUs generally run at lower clock frequencies, so access latency matters less, and GPUs generally operate on a few large chunks of data rather than lots of small chunks like CPUs do.

In the PS4 this means the GPU gets the kind of memory that's best for it, while the CPU gets badly gimped by this type of memory being used. Remember how badly Knack lagged whenever there were a lot of physics objects on screen at once? Well, this is the probable culprit.
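If you want to see that bandwidth-versus-latency difference for yourself, here's a rough C sketch (not taken from any of the machines discussed here; the array size and timing method are arbitrary): reading one big array sequentially is mostly bandwidth-bound, while visiting it in a shuffled order is dominated by access latency, which is closer to the small-chunk pattern CPUs deal with all the time.

/* Sequential (bandwidth-bound) vs. shuffled (latency-bound) reads of one array.
   Purely illustrative; sizes and the timing method are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (32 * 1024 * 1024)  /* 32M ints, far larger than any CPU cache */

int main(void) {
    int *data = malloc((size_t)N * sizeof *data);
    size_t *order = malloc((size_t)N * sizeof *order);
    if (!data || !order) return 1;

    for (size_t i = 0; i < N; i++) { data[i] = (int)i; order[i] = i; }

    /* Crude Fisher-Yates shuffle so the second pass jumps around in memory */
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t r = ((size_t)rand() << 16) | (size_t)rand();
        size_t j = r % (i + 1);
        size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }

    long long sum = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++) sum += data[i];          /* streaming: bandwidth matters */
    clock_t t1 = clock();
    for (size_t i = 0; i < N; i++) sum += data[order[i]];   /* scattered: latency matters */
    clock_t t2 = clock();

    printf("sequential %.2fs, random %.2fs (sum %lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(data);
    free(order);
    return 0;
}

On a typical machine the shuffled pass comes out several times slower even though it touches exactly the same bytes, which is why latency-optimized DDR suits a CPU better than a bandwidth-optimized GDDR bus.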
 
So it seems like no Broadwell, no 20nm Maxwell, and no DDR4.

How about a presentation involving IGZO and a whole bag of Apple adjectives: gorgeous, amazing, etc.?

If there is no IGZO either, I think this year's MacBook Pro presentation is going to require Cirque du Soleil onstage to distract from the lack of new technology.
 
My suspicion is that we're probably going to see the next set of machines WITH Broadwell and possibly IGZO panels at some point next year. This year we won't be seeing anything bigger than the refresh the MacBook Air got earlier this year.

As for Maxwell, my suspicion is that because of the pretty significant GPU upgrade that's supposed to come with Broadwell, they might drop the dedicated GPU entirely. If they don't, then we'll probably see an upgrade to Maxwell, as they usually upgrade the GPUs as well whenever they move to a new CPU architecture.

If Apple's going to be holding any more events this year, the focus will obviously be on the iPhone 6, possibly a new iPad with some iPod news, and OS X 10.10 along with iOS 8.
 
As for Maxwell, my suspicion is that because of the pretty significant GPU upgrade that's supposed to come with Broadwell, they might drop the dedicated GPU entirely. If they don't, then we'll probably see an upgrade to Maxwell, as they usually upgrade the GPUs as well whenever they move to a new CPU architecture.

Where have you heard that the iGPU which comes with Broadwell is supposed to be a significant upgrade? The 850/860M is supposed to be a huge upgrade over the 750M, but I haven't heard the same about Haswell's Iris Pro versus Broadwell's iGPU.
 
I don't get how computers are still using DDR3 when the PS4 has 8 GB of DDR5. Can anyone explain this to me? I'm not understanding.

GDDR5 has been out for quite some time, and it has nothing to do with DDR4. GDDR5 was out long before the DDR4 specification was even finalized, so you're getting confused by naming conventions and nothing more. They're used for different things.

My suspicion is that we're probably going to see the next set of machines WITH Broadwell and possibly IGZO panels at some point next year. This year we won't be seeing anything bigger than the refresh the MacBook Air got earlier this year.

As for Maxwell, my suspicion is that because of the pretty significant GPU upgrade that's supposed to come with Broadwell, they might drop the dedicated GPU entirely. If they don't, then we'll probably see an upgrade to Maxwell, as they usually upgrade the GPUs as well whenever they move to a new CPU architecture.

If Apple's going to be holding any more events this year, the focus will obviously be on the iPhone 6, possibly a new iPad with some iPod news, and OS X 10.10 along with iOS 8.

The iGPU improvements are always over-hyped, and I doubt this round will be any different, even though they have made some very impressive improvements. As for Apple, it's probably only a matter of time. I can say that their discrete graphics implementations have been far more problematic than integrated graphics in recent years. The 2011s have a high failure rate. 2010 had a high failure rate. 2008 or 2009 (can't remember which) had a very, very high failure rate. The combination of CPU + GPU at high load can also draw significantly more power than their chargers will supply. 2011 was the worst in that regard, with the switch to quad-core CPUs and a fairly hot GPU.
 
Where have you heard that the iGPU which comes with Broadwell is supposed to be a significant upgrade? The 850/860M is supposed to be a huge upgrade over the 750M, but I haven't heard the same about Haswell's Iris Pro versus Broadwell's iGPU.

The latest news is that Broadwell's GT3 is supposed to have 48 EUs (vs. 40 in Haswell), so that could translate to around a 20-25% increase in computing performance. Of course, that doesn't matter much for many practical purposes, since Iris is bandwidth-limited anyway, and I don't see how they can fix that.
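To put rough numbers on the EU bump: assuming the commonly quoted 16 FLOPs per EU per clock for Intel's Gen EUs (two SIMD-4 FMA pipes) and the same boost clock for both parts (the 1.3 GHz below is just a placeholder), the extra EUs alone work out to about 20%; the rest of the 20-25% range would have to come from clocks or architectural tweaks.

/* Back-of-the-envelope peak-throughput scaling from 40 to 48 EUs.
   The per-EU figure and the clock are assumptions, not measured numbers. */
#include <stdio.h>

int main(void) {
    const double flops_per_eu_per_clock = 16.0;  /* 2 x SIMD-4 FPUs, FMA counted as 2 ops */
    const double clock_ghz = 1.3;                /* placeholder boost clock */

    double haswell_gt3   = 40 * flops_per_eu_per_clock * clock_ghz;  /* ~832 GFLOPS */
    double broadwell_gt3 = 48 * flops_per_eu_per_clock * clock_ghz;  /* ~998 GFLOPS */

    printf("40 EUs: %.0f GFLOPS, 48 EUs: %.0f GFLOPS, gain: %.0f%%\n",
           haswell_gt3, broadwell_gt3, (broadwell_gt3 / haswell_gt3 - 1.0) * 100.0);
    return 0;
}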
 
The latest news is that Broadwell's GT3 is supposed to have 48 EUs (vs. 40 in Haswell), so that could translate to around a 20-25% increase in computing performance. Of course, that doesn't matter much for many practical purposes, since Iris is bandwidth-limited anyway, and I don't see how they can fix that.

Yeah, that's what I was thinking. The dGPU gets a very big improvement, while Broadwell's iGPU improvement is quite small. So this would not be a reason for them to ditch the dGPU at all; if anything, it's a reason for them to keep it (in the high-end model).
 
The latest news is that Broadwell's GT3 is supposed to have 48 EUs (vs. 40 in Haswell), so that could translate to around a 20-25% increase in computing performance. Of course, that doesn't matter much for many practical purposes, since Iris is bandwidth-limited anyway, and I don't see how they can fix that.
Bandwidth-limited? As far as I know, the eDRAM runs at the same speed as the iGPU. And the iGPU in 14 nm processors is faster because those iGPUs generate less heat, which means the iGPU can reach a higher average clock rate compared to Haswell and Ivy Bridge.

The normal DDR3 RAM is also not a big limitation. Or do you call 12.8 GB/s a bandwidth limitation?
 
The iGPU improvements are always over-hyped, and I doubt this round will be any different, even though they have made some very impressive improvements. As for Apple, it's probably only a matter of time. I can say that their discrete graphics implementations have been far more problematic than integrated graphics in recent years. The 2011s have a high failure rate. 2010 had a high failure rate. 2008 or 2009 (can't remember which) had a very, very high failure rate. The combination of CPU + GPU at high load can also draw significantly more power than their chargers will supply. 2011 was the worst in that regard, with the switch to quad-core CPUs and a fairly hot GPU.

Considering we did get an improvement big enough for the dedicated GPU to become an optional extra on the 15", I really wouldn't be too surprised if the next leap is in the same class. That's at least what I've heard from people I've met who actually work at Intel developing these chips and the drivers for them.

As for unreliable dedicated chips, there have been the GeForce 8600M (mid 2007 and early 2008) and the Radeon 6xxx series (early and late 2011), but the 320M/330M in the 2010 models has not been suffering from a breakdown epidemic. With the 8600M the problem was caused by a manufacturing defect, and in the Radeon 6xxx series it was bad lead-free solder (which also killed a lot of early Xbox 360s).

I personally still own a mid 2007 MacBook Pro, and it's been fixed once under the extended warranty for that issue. I've also got an early 2011 machine, but that has yet to fall victim to Radeongate, and I genuinely hope it doesn't.
 
Bandwidth-limited? As far as I know, the eDRAM runs at the same speed as the iGPU. And the iGPU in 14 nm processors is faster because those iGPUs generate less heat, which means the iGPU can reach a higher average clock rate compared to Haswell and Ivy Bridge.

The normal DDR3 RAM is also not a big limitation. Or do you call 12.8 GB/s a bandwidth limitation?

DDR3 is actually 25.6 GB/s here, because it's dual-channel. And yes, it is quite bandwidth-limited compared to mid-range GPUs equipped with GDDR5, which have bandwidths of 80 GB/s or higher. The 128MB of eDRAM with its 50 GB/s helps a lot to boost iGPU performance, but the big difference in memory bandwidth is the main reason why Iris Pro is behind in gaming despite having more compute capacity than, say, a 750M.
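For anyone who wants to check where those figures come from, peak bandwidth is just the effective transfer rate times the bus width times the number of channels. The inputs below are the nominal specs being thrown around in this thread (real-world throughput is lower), with the 750M's typical 128-bit, 5 GT/s GDDR5 configuration assumed.

/* Peak bandwidth = effective transfer rate (MT/s) x bus width (bytes) x channels.
   Nominal spec-sheet numbers only, not measured throughput. */
#include <stdio.h>

static double peak_gb_per_s(double mt_per_s, int bus_bits, int channels) {
    return mt_per_s * (bus_bits / 8.0) * channels / 1000.0;
}

int main(void) {
    printf("DDR3-1600, single channel: %.1f GB/s\n", peak_gb_per_s(1600, 64, 1));   /* 12.8 */
    printf("DDR3-1600, dual channel:   %.1f GB/s\n", peak_gb_per_s(1600, 64, 2));   /* 25.6 */
    printf("GDDR5 @ 5 GT/s, 128-bit:   %.1f GB/s\n", peak_gb_per_s(5000, 128, 1));  /* 80.0 */
    return 0;
}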
 
You should know that bandwidth really isn't something you should blindly stare at, as it's only a limiting factor when you're working with a few really big chunks of data, like with video or 3D graphics. CPUs generally work on lots of small chunks of data, and with data arranged like that, latency, which is the primary advantage of DDR over GDDR, provides the bigger benefit.

Like I already said, there's no point in having GDDR as CPU memory unless the CPU and GPU have to share the same memory pool (like in the Xbox One, PS4 and Wii U) and you're doing the kind of work that GPUs do best either way.
 
You should know that bandwidth really isn't something you should blindly stare at, as it's only a limiting factor when you're working with a few really big chunks of data, like with video or 3D graphics. CPUs generally work on lots of small chunks of data, and with data arranged like that, latency, which is the primary advantage of DDR over GDDR, provides the bigger benefit.

Like I already said, there's no point in having GDDR as CPU memory unless the CPU and GPU have to share the same memory pool (like in the Xbox One, PS4 and Wii U) and you're doing the kind of work that GPUs do best either way.

Yep, you're right, but Retrofire and Leman are talking about the iGPU and its associated memory, which is the 128MB of eDRAM plus system memory, i.e. DDR3. Which, as you say, is great for CPU-type activities, but not good when the iGPU is using it as VRAM, where GDDR would be better.

I think this is a case of crossed wires with lots of conversations at once ;)
 
Again, I feel like unless Apple does a redesign, it doesn't make sense for them to give up on the dGPU, since the slot exists and it's an integral part of the cooling. Now, if they redesigned it to be "thinner", that's a different picture, but Apple rarely redesigns its logic boards, and the battery life is already pretty good.
 
Considering we did get an improvement big enough for the dedicated GPU to become an optional extra on the 15", I really wouldn't be too surprised if the next leap is in the same class. That's at least what I've heard from people I've met who actually work at Intel developing these chips and the drivers for them.

As for unreliable dedicated chips, there have been the GeForce 8600M (mid 2007 and early 2008) and the Radeon 6xxx series (early and late 2011), but the 320M/330M in the 2010 models has not been suffering from a breakdown epidemic. With the 8600M the problem was caused by a manufacturing defect, and in the Radeon 6xxx series it was bad lead-free solder (which also killed a lot of early Xbox 360s).

I personally still own a mid 2007 MacBook Pro, and it's been fixed once under the extended warranty for that issue. I've also got an early 2011 machine, but that has yet to fall victim to Radeongate, and I genuinely hope it doesn't.

There was a repair program for the 330M due to it being problematic. I also have a 2011 that is still going strong. I doubt they will have a repair program for them, given that the last one only extended to the 3-year mark. We're almost there now.
 