Hmm I wonder if they could have Iris Pro in the 13".

It seems the 5200 is just a 5100 plus the eDRAM. I'm not sure, but I don't think the eDRAM would generate much heat; it does add cost, though. I suspect it's the cost that keeps it in the high-end quad-core chips only. I think a mid-level GPU, i.e. Iris Pro, on a mid-level CPU like the i5 in the 13" would be a winning combination.

Iris 5100 = 40 execution units @ 200-1200MHz
Iris 5200 = 40 execution units @ 200-1300MHz + eDRAM
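
Just to put those numbers in perspective, here's a rough back-of-envelope peak-throughput sketch. The 16 FLOPS per EU per clock figure is my own assumption for the Gen7.5 architecture (two 4-wide FPUs, FMA counted as two ops), so treat the results as ballpark only:

```python
# Back-of-envelope peak throughput for Iris 5100 vs Iris 5200 (Haswell GT3/GT3e).
# ASSUMPTION: 16 single-precision FLOPS per EU per clock (2x 4-wide SIMD, FMA = 2 ops);
# this is my reading of Gen7.5, not an official Intel figure.

FLOPS_PER_EU_PER_CLOCK = 16

def peak_gflops(eus, max_clock_ghz):
    """Theoretical peak single-precision GFLOPS at the maximum turbo clock."""
    return eus * FLOPS_PER_EU_PER_CLOCK * max_clock_ghz

iris_5100 = peak_gflops(40, 1.2)
iris_5200 = peak_gflops(40, 1.3)

print(f"Iris 5100: ~{iris_5100:.0f} GFLOPS peak")   # ~768
print(f"Iris 5200: ~{iris_5200:.0f} GFLOPS peak")   # ~832
print(f"Raw compute gap: ~{100 * (iris_5200 / iris_5100 - 1):.0f}%")  # ~8%
# In practice the gap is usually bigger than ~8%, because the 5200's 128MB of eDRAM
# relieves the shared-DDR3 bandwidth bottleneck; the clocks alone don't explain it.
```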

The eDRAM chip alone can take around 4.5W, and don't forget the increased memory consumption of the quad core CPU it accompanies.
 
The eDRAM chip alone can take around 4.5W, and don't forget the increased memory consumption of the quad core CPU it accompanies.

4.5W TDP or power draw?

What do you mean by?
"the increased memory consumption of the quad core CPU it accompanies"
 
4.5W TDP or power draw?

That is how much power it uses, AFAIK. I still don't know how CPU makers compute their TDP; it seems to be somewhere around half of the maximum power consumption. At any rate, 4W is quite significant for a 13" laptop...
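
For a sense of scale, here's the arithmetic I'm doing in my head. I'm assuming the 28W TDP of the Iris 5100 parts in the current 13" rMBP, and both numbers are rough:

```python
# Rough share of a 13" laptop's CPU package budget that the eDRAM alone could eat.
# ASSUMPTIONS: 28W TDP package (the Iris 5100 parts in the 13" rMBP) and the ~4.5W
# eDRAM figure quoted above; both are ballpark, not measured values.

package_tdp_w = 28.0
edram_power_w = 4.5

print(f"eDRAM alone: ~{edram_power_w / package_tdp_w:.0%} of a 28W package budget")  # ~16%
```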

What do you mean by?
"the increased memory consumption of the quad core CPU it accompanies"

Sorry, it was supposed to be 'power consumption'. I don't know why Intel only offers Iris Pro on quad-core CPUs; for all we know, there could also be a technical limitation that prevents it from being used with dual-cores...
 
I don't think it's that much about power consumption considering Apple did use Nvidia's GPU+chipset combinations until Intel put a stop to it and decided that Nvidia's chipset license was no longer valid.

In my opinion it's mainly about cost, because those 128MB of eDRAM take up quite a lot of space. Increasing the size of the chip decreases the number of chips Intel can get out of each wafer, and thus they have to charge more per chip. If you've ever looked at the layout of modern server, desktop and laptop chips with L3 cache, you'll see that cache memory takes up a considerable amount of space.
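
To make the per-wafer point concrete, here's a rough sketch using the usual dies-per-wafer approximation; the die sizes are made-up illustrative numbers, not actual Haswell figures:

```python
import math

# Rough dies-per-wafer estimate using the standard approximation (defect yield ignored).
# The die areas below are made-up illustrative numbers, NOT actual Haswell GT3/GT3e sizes.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Approximate gross dies per wafer: usable wafer area minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

smaller_die_mm2 = 180   # hypothetical chip without the extra GPU/cache area
bigger_die_mm2 = 260    # hypothetical bigger chip

print(dies_per_wafer(smaller_die_mm2))  # ~343 dies per 300mm wafer
print(dies_per_wafer(bigger_die_mm2))   # ~230 dies per 300mm wafer
# Fewer dies for the same wafer cost means a higher cost per chip, which is the point above.
```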

Sure, DRAM does use fewer components per bit of data (one transistor and a capacitor, as opposed to six transistors in a typical SRAM cell), but you rarely see SRAM in amounts over a dozen MB. The Xbox One has 32MB, but that takes up a considerable amount of die space (which is why the Xbox One has only "2/3 of the PS4's GPU").
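
And just to illustrate why nobody puts 128MB of SRAM on a die, a quick cell-count comparison (storage cells only, ignoring sense amps, decoders and the rest):

```python
# Quick storage-cell comparison: 1T1C eDRAM vs classic 6T SRAM at the same capacity.
# Counts cover the bit cells only; sense amps, decoders etc. are ignored.

def cell_devices(capacity_mib, devices_per_bit):
    bits = capacity_mib * 1024 * 1024 * 8
    return bits * devices_per_bit

edram_128mb = cell_devices(128, 2)   # 1 transistor + 1 capacitor per bit
sram_128mb = cell_devices(128, 6)    # 6 transistors per bit

print(f"128MB eDRAM: ~{edram_128mb / 1e9:.1f} billion devices")       # ~2.1 billion
print(f"128MB SRAM:  ~{sram_128mb / 1e9:.1f} billion transistors")    # ~6.4 billion
# Roughly 3x the device count (and SRAM cells are larger per device as well), which is
# why even the Xbox One's 32MB of eSRAM eats a big chunk of its die.
```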
 
I don't think it's that much about power consumption considering Apple did use Nvidia's GPU+chipset combinations until Intel put a stop to it and decided that Nvidia's chipset license was no longer valid.

The only Nvidia GPUs ever used in the 13" Apple laptops are the integrated ones. Their power requirements were quite similar to any other integrated solution on the market at that time (say, Intel's own G45). Of course, the Nvidia integrated GPUs were miles ahead of Intel's. Not anymore, though.
 
I can't see a dGPU being added and I don't really think it's needed in thin & light machines. It would chew into the battery life, make the machine run ridiculously hot and bring with it the inevitable driver issues.

I can see why some people want it, but I'm more than happy with the integrated solution in my rMBP.

I'd like to see Intel keep improving the integrated solution of course to please those who want to game for example, but even right now Iris & Iris Pro are capable of semi-decent gaming experiences.
 
The only Nvidia GPUs ever used in the 13" Apple laptops are the integrated ones. Their power requirements were quite similar to any other integrated solution on the market at that time (say, Intel's own G45). Of course, the Nvidia integrated GPUs were miles ahead of Intel's. Not anymore, though.

They weren't integrated in the strictest sense of the word, considering they sat in tandem with the chipset, not in tandem with the CPU like they do today. You could argue that it was a chipset integrated into a GPU rather than the other way around.

Performance-wise they pretty much wiped the floor with the solutions Intel was offering at the time. Intel didn't release at least half-decent integrated solutions until the HD 4000 series, which came out after Apple had to dump Nvidia's chipsets.
 
They weren't integrated in the strictest sense of the word, considering they sat in tandem with the chipset, not in tandem with the CPU like they do today. You could argue that it was a chipset integrated into a GPU rather than the other way around.

Performance-wise they pretty much wiped the floor with the solutions Intel was offering at the time. Intel didn't release at least half-decent integrated solutions until the HD 4000 series, which came out after Apple had to dump Nvidia's chipsets.

Apple dumped NVIDIA's iGPUs in 2011 by using the Intel HD 3000 in the Sandy Bridge Macs.
 
They weren't integrated in the strictest sense of the word, considering they sat in tandem with the chipset, not in tandem with the CPU like they do today. You could argue that it was a chipset integrated into a GPU rather than the other way around.

Erm. No you couldn't. Yeah, the GPU was physically bigger, but no chipset = no communications bus.
 
They weren't integrated in the strictest sense of the word, considering they sat in tandem with the chipset, not in tandem with the CPU like they do today. You could argue that it was a chipset integrated into a GPU rather than the other way around.

'Integrated' does not mean 'part of the CPU'; it means that the GPU is not its own physical unit with its own memory and interface. GPUs that are actually physically integrated into the CPU are a very recent invention. Actually, AFAIK, the first mainstream CPU to include a GPU on the same package was Intel's Westmere in 2010. Before that, all iGPUs (Intel, Nvidia, AMD) were part of the mainboard chipset, and they were always called 'integrated' (you would be the first person to argue with that), long before they became part of the CPU. The 9400M and its successors do not have their own memory, nor their own data bus; they sit behind the default system RAM controller, just like in modern SoC solutions.
 
Have you ever tried using one? Because they're generally loud as hell, and in a metal frame they would become pretty uncomfortable to use. When you're doing anything even slightly heavier, Apple's MacBook Pro line already runs too hot to be comfortably used on your lap without pants on, and worsening the heat-to-dissipation-area ratio will only make it worse.

I have. If not for the poor TFT LCD screen I would have kept it, too.
 
I have. If not for the poor TFT LCD screen I would have kept it, too.

The new Razer Blade (2014) 14" has an IGZO 3200x1800 display in a 14", MacBook-sized body. It has a quad-core Haswell i7 at up to 3.2 GHz, 8 GB of RAM, and an Nvidia 860 (maybe 850?) with 3GB of VRAM.

It supposedly pulls 1-2 hours of gaming battery and 5-6 hours of normal usage battery life.

Hell, if Razer can do it in literally the exact same body as an MBP (maybe 1" larger, but I'd love a 14" rMBP) at 0.7" thick... why not, Apple?
 
The new Razer Blade (2014) 14" has an IGZO 3200x1800 display in a 14", MacBook-sized body. It has a quad-core Haswell i7 at up to 3.2 GHz, 8 GB of RAM, and an Nvidia 860 (maybe 850?) with 3GB of VRAM.

It supposedly pulls 1-2 hours of gaming battery and 5-6 hours of normal usage battery life.

Hell, if Razer can do it in literally the exact same body as an MBP (maybe 1" larger, but I'd love a 14" rMBP) at 0.7" thick... why not, Apple?

I also think the Blade is a marvellous piece of engineering, but IMO it is quite clear how Razer can do it. The Blade's battery is on par with the 13" rMBP's, which gives them additional space for a bigger GPU and cooling system. The Blade's mainboard is bigger than that of the 15" retina model. Just look at the teardown pictures of the two laptops; the difference is obvious (you can use the RAM chip size for comparison).
 
Erm. No you couldn't. Yeah, the GPU was physically bigger, but no chipset = no communications bus.

The fact that two things are on the same piece of silicon doesn't mean one is integrated into the other. If you really want to stretch the definition of the word, you could even call a dedicated GPU "integrated" because it works in close relation with the CPU.

This is what Dictionary.com defines it as:
combining or coordinating separate elements so as to provide a harmonious, interrelated whole
organized or structured so that constituent units function cooperatively

Doesn't say anything that could be interpreted as requiring it to be on the same piece of silicon, does it?

Apple dumped NVIDIA's iGPUs in 2011 by using the Intel HD 3000 in the Sandy Bridge Macs.

Never disputed this; I just said that they weren't any good until the HD 4000 series.
 
The fact that two things are on the same piece of silicon doesn't mean one is integrated into the other. If you really want to stretch the definition of the word, you could even call a dedicated GPU "integrated" because it works in close relation with the CPU.

This is what Dictionary.com defines it as:

Oh, come on, Joe. The notion of 'integrated GPU' has had a very clear meaning in the industry for over a decade. Trying to argue against it is like saying that 'space ships' are actually 'space planes'. Quoting dictionaries is extremely naive in this context, because this notion has very little to do with the 'normal' meaning of the word (I say this as a linguist). You should rather quote Wikipedia or a tech website, if you must quote something.
 
The new Razer Blade (2014) 14" has an IGZO 3200x1800 display in a 14", MacBook-sized body. It has a quad-core Haswell i7 at up to 3.2 GHz, 8 GB of RAM, and an Nvidia 860 (maybe 850?) with 3GB of VRAM.

It supposedly pulls 1-2 hours of gaming battery and 5-6 hours of normal usage battery life.

Hell, if Razer can do it in literally the exact same body as an MBP (maybe 1" larger, but I'd love a 14" rMBP) at 0.7" thick... why not, Apple?

870, actually.
 
The new Razer Blade (2014) 14" has an IGZO 3200x1800 display in a 14", MacBook-sized body. It has a quad-core Haswell i7 at up to 3.2 GHz, 8 GB of RAM, and an Nvidia 860 (maybe 850?) with 3GB of VRAM.

It supposedly pulls 1-2 hours of gaming battery and 5-6 hours of normal usage battery life.

Hell, if Razer can do it in literally the exact same body as an MBP (maybe 1" larger, but I'd love a 14" rMBP) at 0.7" thick... why not, Apple?

I was talking about the previous version. I'd like to get the new one, but I now have a desktop and a laptop, so I'd have a hard time justifying another.
 
The fact that two things are on the same piece of silicon doesn't mean one is inteagreated into the other. If you really want to stretch the definition of the word you could even call a dedicated GPU "integrated" because it works in close relation with the CPU.

Dictionary.com? Seriously?

I don't care what's on which bit of silicon. The traditional logical blocks of CPU, northbridge and southbridge are still applicable even if some parts have moved around physically since the PCI architecture of the early 90s.

Integrated graphics have traditionally been defined as being part of the mainboard chipset and utilising shared system memory. OK, so that's a bit woolly when applied to laptops, as there are no expansion slots to plug a discrete card into. The shared memory is the key criterion, as is the manufacturer.
 
I don't care what's on which bit of silicon. The traditional logical blocks of CPU, northbridge and southbridge are still applicable even if some parts have moved around physically since the PCI architecture of the early 90s.

Integrated graphics have traditionally been defined as being part of the mainboard chipset and utilising shared system memory. OK, so that's a bit woolly when applied to laptops, as there are no expansion slots to plug a discrete card into. The shared memory is the key criterion, as is the manufacturer.

If you set the definition at "shared system memory" (i.e. taking a chunk of system RAM rather than having its own memory), then what does that say about Nvidia's TurboCache or AMD's HyperMemory GPUs? Because a lot of them didn't just sit on their own dedicated piece of silicon; a lot of them sat on their own dedicated PCB. Yes, a dedicated PCIe card that took a chunk of system RAM.
 
If you set the definition at "shared system memory" (i.e. taking a chunk of system RAM rather than having its own memory), then what does that say about Nvidia's TurboCache or AMD's HyperMemory GPUs? Because a lot of them didn't just sit on their own dedicated piece of silicon; a lot of them sat on their own dedicated PCB. Yes, a dedicated PCIe card that took a chunk of system RAM.

Those cards had local frame buffer memory and therefore are not integrated graphics... and before you start about Iris Pro: the eDRAM is L4 cache, not a frame buffer.

Graphics cards since the AGP days have been able to use system memory for storing textures; HyperMemory/TurboCache was just a new spin on GART with a clever-sounding name that helped sell the cheaper PCs in your local retail emporium.
 
I also think the Blade is a marvellous piece of engineering, but IMO it is quite clear how Razer can do it. The Blade's battery is on par with the 13" rMBP's, which gives them additional space for a bigger GPU and cooling system. The Blade's mainboard is bigger than that of the 15" retina model. Just look at the teardown pictures of the two laptops; the difference is obvious (you can use the RAM chip size for comparison).

Is there any reason why Apple can't do the same, and maybe price it at just a few hundred less than the maxed-out 15" rMBP? I'd love an RB14; it's just that Windows 8 sucks.
 
Is there any reason why Apple can't do the same, and maybe price it at just a few hundred less than the maxed-out 15" rMBP? I'd love an RB14; it's just that Windows 8 sucks.

The reason is that Apple does not seem to be interested in making that kind of laptop. They were never after pure gaming performance, preferring mobility and battery life. With their technological potential, Apple could certainly build quite a good gaming laptop, but what benefit would that bring them? It makes very little sense to cater to the hardcore gamer market with the current state of OS X. And as for OS X Steam games, most of them are indie titles that happily run on Intel's integrated graphics.
 
I would be very concerned about heat dissipation in that already thin 13" chassis. The 15" w/ dGPU already gets too warm for my liking during games.
 
Those cards had local frame buffer memory and therefore are not integrated graphics... and before you start about Iris Pro: the eDRAM is L4 cache, not a frame buffer.

So you've now set the cutoff point to be that the GPU can't have ANY memory of its own, no matter whether it's on its own piece of silicon or even its own PCB?

While the 9400M was designed from the ground up to be a cheap chip with no memory of its own, the 320M in the following 13" machines was literally a modified 9600M, which I believe you would define as a dedicated GPU, with its own memory cut out. Even the 9400M was just a low-end version of the same architecture used in lots and lots of chips you'd have no objection to classing as "dedicated".

I personally think your definition is stupid when it classes chips based on the same architectures, and even derivatives of proper dedicated GPUs, as "integrated" just because they don't have memory of their own... It's like saying that the more luxurious Land Rovers (the ones Jeremy Clarkson calls "Wayne Wovers" and others call "Chelsea Tractors") aren't off-road vehicles because they've got too many shiny styling details and slightly lower ground clearance.
 