Why is Apple so stingy with graphics cards? Everything is either terrible or way overpriced.
I'll never buy an integrated-GPU-only machine after the POS GMA950 in the first-gen Apple/Intel products.

I also had a white MacBook with the GMA950, it was woeful. I promised myself that I would never again purchase a computer with only an Intel integrated GPU.
 
Why is Apple so stingy with graphics cards? Everything is either terrible or way overpriced.
I'll never buy an integrated-GPU-only machine after the POS GMA950 in the first-gen Apple/Intel products.

If Apple were to put the "best" mobile GPU into their laptops, people would be complaining about how the Haswell rMBP has crappier battery life and gets too hot on the lap. :rolleyes: They can't win with everyone.
 
I also had a white MacBook with the GMA950, it was woeful.

Right, only that Iris is not GMA!

GMA was integrated into the northbridge, one node behind the main CPU if I remember correctly, just because they figured out there was some extra space to spare.
Iris is on the same die as the CPU and consumes around half the silicon area dedicated to the chip! That's before we even include the L4 cache!

I just can't understand how your bad experience with the crappy GMA is in any way relevant to 5000-series graphics.
 
I'm holding out for the new rMBP.

I'd buy one without a dedicated GPU if they drop the prices. If they don't, then I won't.
 
I also had a white MacBook with the GMA950, it was woeful. I promised myself that I would never again purchase a computer with only an Intel integrated GPU.

I'm glad I'm not the only one who experienced that. It opened my eyes and taught me about GPUs (I was only a kid).

----------

Right, only that Iris is not GMA!

GMA was integrated into the northbridge, one node behind the main CPU if I remember correctly, just because they figured out there was some extra space to spare.
Iris is on the same die as the CPU and consumes around half the silicon area dedicated to the chip! That's before we even include the L4 cache!

I just can't understand how your bad experience with the crappy GMA is in any way relevant to 5000-series graphics.

I don't understand how you don't understand.

Made by Intel, integrated, and crap.

So why would Intel's new GPUs, also made by Intel and integrated, be any different?

All integrated GPUs are terrible. If you are happy then good for you but the majority of people won't be.
 
Right, only that Iris is not GMA!

GMA was integrated into the northbridge, one node behind the main CPU if I remember correctly, just because they figured out there was some extra space to spare.
Iris is on the same die as the CPU and consumes around half the silicon area dedicated to the chip! That's before we even include the L4 cache!

I just can't understand how your bad experience with the crappy GMA is in any way relevant to 5000-series graphics.

I realise that my experience with the GMA950 has no direct relevance to the 5000 series, but to be honest, that iGPU left a rather bad taste in my mouth (I'm sure I'm not alone in this).

Obviously the Iris Pro is quite capable, but I'm not prepared just yet to run a Retina display with only the Iris Pro iGPU. I will monitor the benchmarks and real-world tests closely as soon as they surface; hopefully they'll change my mind.
 
I don't understand how you don't understand.

Made by Intel, integrated, and crap.

Because it's a totally different architecture and a totally different implementation.

GMA was an afterthought, "integrated" in the chipset, a node behind, sitting there just for 2D and media stuff.

Iris is a completely different beast. Not to mention that "integrated" (same die as the CPU) is actually good for performance. Just take a look at the new-gen consoles: the PS4 and Xbox One both have the CPU and GPU on the same die.
Not because it's cheaper; quite the opposite is true, in fact. But having unified memory and the ability of the GPU to share caches with the CPU (in the case of Iris, the L3 and a massive L4) is a big win for many workloads. Theoretical floating-point performance of the HD 5200 and 5100 should be around 70% of the Xbox One's (rough math below). Not bad at all! And how did GMA compare to the PS3 and Xbox 360? :D
"Integrated" means, well... integration, and that's a good thing in silicon-based circuitry.

But you want dedicated graphics? Well, Intel "dedicates" half the silicon area on their leading-edge 22nm process to GPU cores, then "dedicates" 128MB of on-package, low-latency memory to provide the required bandwidth.
Now, it's not perfect. In some areas it's much better, in some not so much: the L4 cache, for example, isn't sufficient to completely replace the need for a big, fast memory pool (GDDR5).
But overall, it's as "dedicated" as you can get.

Semantics. :)
 
My guess: :apple: will drop the price of the 15" by $300 :)

That would be good.

I really need a 15" (13" is too small for my needs) and I do some light gaming; the Iris Pro would probably be OK for that. It's just that if they removed the dedicated card and the price didn't reflect it, I'd feel like I was getting ripped off.
 
It's not like there's an absence of desire for that. The new MacBook Air, anyone?

- HD 5000-equipped CPUs are quite a bit more expensive than last year's (yes, Apple doesn't pay list price for them, but neither does any other big OEM, and more silicon simply means a higher price).
- LPDDR3 memory is also more expensive; on top of that, they use stacked dies, so just four modules for 4GB of capacity.
- I suspect the new 802.11ac-capable cards don't help either, and probably neither do the PCIe SSDs (although that's probably a bit of a stretch).

But hey, surprise! It's cheaper!

So yeah, I expect them to lower the prices of the rMBPs, but it shouldn't have anything to do with the choice of GPU, simply because Iris is by no means inferior technology (in the absence of a discrete GPU chip and GDDR5 memory, Foxconn would have less soldering to do, however).
 
Folks are dismissing gaming as if it requires an entirely different build. That may be so if one is buying a $600 budget laptop, or even a $1200 mid-range laptop.

If, however, I'm going to drop $2000 - $3000 on a laptop, it darn well better be able to play a fairly recent game at decently high graphics settings with decently high frame rates. It doesn't have to be the professional gamer's choice, but...

If an iGPU (Iris Pro) can deliver then great, I'm all for the benefits of integrated-only graphics, but if it can't advance the all-around capability of the rMBP (including for games), then it doesn't belong in the new spec.

 
I think you probably misunderstand: apparently, the rMBP is the weapon of choice for AAA game developers; that's how "pro" and "games" came together. So it turns out that Intel-only graphics in MacBooks portends the end of their black-magic science and the gaming industry as we know it. Go figure. :D
 
Looking at the numbers at AnandTech, it looks like that 8W change in TDP and power consumption buys 3-5 FPS in games. At this point it's 55W. Apple could go higher with the combined TDP of the processor and iGPU, even up to last year's (78W), but that would give 10 more FPS at most; it won't blow away the GT 650M, it will be about the same, and at that TDP the battery life will be the same too. So where is the benefit?

It would only help if Apple went iGPU-only for the BASE model of the Retina, with the option of configuring it with a higher-clocked i7 with HD 4600 graphics and a dGPU like the GT 750M/HD 8870M. The iGPU-only base model could carry a price tag at least $300 lower (imagine an rMBP with a 2.0 GHz processor, 8GB RAM, Iris Pro, and a 256GB SSD for $1799, and a 2.7 GHz/16GB RAM/GT 750M/512GB model for $2499).

Now it makes a lot more sense.
 
So, are you saying that the lower-end model should be faster than the high-end one in OpenCL apps, Apple's bread-and-butter FCPX included, just so people can play GRID 2 in Windows with 3 FPS fewer on ultra settings? :D
 
For work I use a lot of CAD software (Rhino, SolidWorks, etc.) on a 2007 15" 2.4GHz/4GB MBP. Granted, I cannot work on massive product files with my current rig, but I can't see an rMBP without a dGPU and with 16GB of RAM performing poorly. Yes, it would perform better with a dedicated GPU, but 16GB of RAM for 3D modeling seems like more than enough unless I decide to work on massive 200+GB files.
 
I think you probably misunderstand: apparently, the rMBP is the weapon of choice for AAA game developers; that's how "pro" and "games" came together. So it turns out that Intel-only graphics in MacBooks portends the end of their black-magic science and the gaming industry as we know it. Go figure. :D

I think you have probably never worked with the iOS Simulator... while developing any project that even remotely uses OpenGL (that's about 99% of mobile apps made now).

News flash: it triggers the dGPU for a reason.

So yeah, the rMBP may still be used by developers to develop games. Maybe not AAA titles, but at least it should be capable enough to simulate Angry Birds.
 
Well, considering the last-gen HD 4000 offered 16 execution units vs. 40 in the HD 5200, yes, I am sure it will be up to the task. And if it really is in the new rMBP, you can be sure that the iOS Simulator will be one of the first pieces of software to be optimized for the more specific features of the architecture (like the L4 cache).
 
Well, considering the last-gen HD 4000 offered 16 execution units vs. 40 in the HD 5200, yes, I am sure it will be up to the task. And if it really is in the new rMBP, you can be sure that the iOS Simulator will be one of the first pieces of software to be optimized for the more specific features of the architecture (like the L4 cache).

I just want to say that IMO you're the only one in this thread who's being realistic about this whole topic. That is all :apple:

Oh, and Anand mentioned a long time ago in his mega-review of the 15" rMBP that this was going to be a strong possibility. I trust him as a tech insider more than anyone else:

http://www.anandtech.com/show/6023/the-nextgen-macbook-pro-with-retina-display-review/8

At each IDF I kept hearing about how Apple was the biggest motivator behind Intel’s move into the GPU space, but I never really understood the connection until now. The driving factor wasn’t just the demands of current applications, but rather a dramatic increase in display resolution across the lineup. It’s why Apple has been at the forefront of GPU adoption in its iDevices, and it’s why Apple has been pushing Intel so very hard on the integrated graphics revolution. If there’s any one OEM we can thank for having a significant impact on Intel’s roadmap, it’s Apple. And it’s just getting started.

Sandy Bridge and Ivy Bridge were both good steps for Intel, but Haswell and Broadwell are the designs that Apple truly wanted. As fond as Apple has been of using discrete GPUs in notebooks, it would rather get rid of them if at all possible. For many SKUs Apple has already done so. Haswell and Broadwell will allow Apple to bring integration to even some of the Pro-level notebooks.
 
Well, considering the last-gen HD 4000 offered 16 execution units vs. 40 in the HD 5200, yes, I am sure it will be up to the task. And if it really is in the new rMBP, you can be sure that the iOS Simulator will be one of the first pieces of software to be optimized for the more specific features of the architecture (like the L4 cache).

"L4 cache" is basically glorified video memory.

Except it isn't, really. It's slower than the GDDR5 that the 650M uses, and there's only 128MB of it.

That's why the HD 5200 sucks at higher resolutions: higher resolutions demand far more bandwidth.
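
To put a rough number on that, here is a hedged sketch of just the framebuffer traffic at the two resolutions (my assumptions: 4 bytes per pixel, 60 FPS, and about six full-screen read/write passes per frame; texture fetches come on top of this and real workloads vary wildly).

# Back-of-the-envelope framebuffer traffic per second at two resolutions.
# Assumptions: RGBA8 (4 bytes/pixel), 60 FPS, ~6 full-screen read/write
# passes per frame (color, depth, blending, post); texturing is extra.

def framebuffer_gbs(width, height, bytes_per_pixel=4, fps=60, passes=6):
    return width * height * bytes_per_pixel * fps * passes / 1e9

print(f"1440x900:  ~{framebuffer_gbs(1440, 900):.1f} GB/s")    # ~1.9 GB/s
print(f"2880x1800: ~{framebuffer_gbs(2880, 1800):.1f} GB/s")   # ~7.5 GB/s, 4x the pixels

Quadruple the pixels and you quadruple the fill-rate and bandwidth demand, which is exactly where a 128MB cache in front of DDR3 feels the squeeze compared to GDDR5.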

And how many execution units there are in the GPU shouldn't matter. If we go by that logic, the 650M with 384 shader units should completely demolish the HD 5200 with only 40 units.

And the reality is... it does demolish the HD 5200, especially at higher resolutions. Intel can "cheat" with their drivers, but on raw performance and raw numbers alone, the HD 5200 is nowhere close to the 650M.

And Apple overclocked the 650M in the last rMBP even further, so its performance is well above a stock 650M's.

No matter how you look at it, going to HD 5200 is a regression in performance.

The only saving grace is OpenCL performance. But... good luck actually finding an application that makes extensive use of it. OpenCL is used only for very specific things in a small number of applications right now. Take Photoshop, for instance: OpenCL is only used for the Liquify tool. Everywhere else, OpenGL performance is far more important, and Intel's GPU solutions have sucked at OpenGL since forever. You can ask any enthusiast (not even a professional) and they'll tell you.
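
For anyone wondering what "making use of OpenCL" actually involves on the application side, here is a minimal, hedged Python sketch using the third-party pyopencl package: it lists the OpenCL devices the runtime exposes and runs a toy vector-add kernel. The device list and numbers are illustrative, not a claim about any particular Mac.

import numpy as np
import pyopencl as cl

# Enumerate whatever OpenCL devices the runtime exposes (CPU, iGPU, dGPU...).
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, "->", device.name)

# A toy kernel: element-wise addition of two float arrays.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print("max error:", np.max(np.abs(out - (a + b))))

The point being: moving a workload to OpenCL means writing and maintaining kernels like this for every hot path, which is part of why so few desktop apps bother today.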
 
And how many execution units there are in the GPU shouldn't matter. If we go by that logic, the 650M with 384 shader units should completely demolish the HD 5200 with only 40 units.

Since Intel's EUs are 16-wide vector units, you'd have to compare 384 vs. 640 (40 * 16) to make a somewhat fair comparison.
Nvidia is better at keeping its units loaded at maximum efficiency, but it has fewer of them and they don't turbo as high. So by that same dumb logic, the Iris is the more powerful one.
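
Putting numbers on that "dumb logic" (a sketch under my own assumptions: ~0.9 GHz for Apple's overclocked 650M, ~1.3 GHz turbo for the HD 5200, 2 FLOPS per CUDA core per clock and 16 FLOPS per Intel EU per clock, both counting FMA):

# Paper peak-FP32 under "just count the units" logic.
# Assumptions: GT 650M (rMBP, overclocked) = 384 CUDA cores * 2 FLOPS/clock at ~0.9 GHz;
# HD 5200 = 40 EUs * 16 FLOPS/clock at ~1.3 GHz turbo.

gt_650m_gflops = 384 * 2 * 0.9   # ~691 GFLOPS
hd_5200_gflops = 40 * 16 * 1.3   # ~832 GFLOPS

print(f"GT 650M (rMBP):     ~{gt_650m_gflops:.0f} GFLOPS peak")
print(f"HD 5200 (Iris Pro): ~{hd_5200_gflops:.0f} GFLOPS peak")
# On paper the Iris Pro comes out ahead; sustained, real-game performance
# is a different story, which is exactly why unit counting settles nothing.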

Adding some nearby cache like that L4 is for the most part a power-saving feature. It is simply the cheapest way to handle the bandwidth requirements with minimal power use. It is not supposed to be as big as a full 2GB of VRAM; it only needs to be big enough to reduce the load on the two 64-bit DDR3 channels. There is a reason smartphone GPUs have worked differently: if they can keep data close, they can save a lot of power.
Intel seems to like that solution so much that DDR4 doesn't even seem to make it onto the Broadwell roadmap.
They obviously cannot use GDDR5 like the PS4, because that would kill CPU performance: GDDR is bandwidth-optimized and bad at latency. Game developers may be willing to program around that problem in games, but for everyday desktop applications it wouldn't be good.
Putting that L4 in place adds a lot of bandwidth for a third of the power cost of anything else.
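
Rough numbers for the bandwidth side of that argument; the DDR3 figure is straight arithmetic, and the eDRAM figure is the commonly cited ballpark for Crystalwell, so treat both as approximations.

# Ballpark memory bandwidth available to a Haswell iGPU.
# Two 64-bit DDR3-1600 channels vs. the on-package eDRAM ("L4").

ddr3_channels = 2
ddr3_bus_bytes = 64 / 8          # 64-bit channel = 8 bytes per transfer
ddr3_transfers = 1600e6          # DDR3-1600: 1600 MT/s

ddr3_gbs = ddr3_channels * ddr3_bus_bytes * ddr3_transfers / 1e9
print(f"Dual-channel DDR3-1600: ~{ddr3_gbs:.1f} GB/s")   # ~25.6 GB/s

# Crystalwell eDRAM is commonly quoted at ~50 GB/s in each direction,
# i.e. roughly double the DDR3 figure per direction, on top of it.
print("eDRAM ('L4'):           ~50 GB/s each way (commonly cited figure)")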

The thing about all this is that iGPUs simply have more options for improving power efficiency. Today it's similar to somewhat lower performance, but with each generation it will be harder for Nvidia to keep up. They will be pushed more and more into the higher TDP classes to make a case for their GPUs.
The 2010 MBP was basically a 73W notebook. Now we are at 100W. With Haswell it will go down to about 60-70W (10W screen, Turbo). Those added 40W for a dGPU should deliver more than just somewhat-faster performance, especially for Apple with their crap automatic switching that kicks in unnecessarily way too often.

If one complains about OpenGL performance, I would really look more at OS X performance. Drivers are a big part of that, and Apple will probably try to make sure there isn't a big difference (versus the 650M) to complain about.

People in this thread keep bringing up the GMA 950, which is light-years behind even an HD 3000 in performance. GPU performance has four variables:
  1. How many transistors are dedicated to the GPU (a huge 500 mm² GPU is usually a lot faster than a much smaller one).
  2. Bandwidth (all that processing power is no use if it isn't fed).
  3. Architecture.
  4. Drivers.

  1. How many transistors are used for the GPU has changed a lot. iGPUs used to be tiny on old processes. AMD's APUs were the first to give almost half the die area to the GPU; Intel is now at the same level.
  2. Bandwidth is only an issue for the faster GPUs. Intel added the L4 cache to break past the barrier of slow DDR3.
  3. The architecture has changed a lot. About 5-7 years ago Intel started hiring lots of GPU experts. It takes a while to get results, but this new GPU architecture was made by people who used to work at Nvidia/AMD/others. It has nothing to do with that little GMA-like afterthought of a GPU.
  4. Drivers have also picked up, and they don't matter as much on OS X anyway. The HD 4600 does significantly better in games than the HD 4000, while the optimized synthetics are more in line with the small EU increase, so they obviously have some people working on that. Obviously not like Nvidia, which even helps game developers while they are still developing, but on Intel's end it is not as bad as it used to be.

The HD 5200 will definitely not be as fast as the current-gen dedicated alternatives, but the real issue is whether that difference justifies the power difference. Nobody complained that the 6750M was crap, and the Iris Pro will definitely be better than that one.
I think that this generation, a notebook would have to have a 765M or faster to show a big enough performance difference to really be worth it.
 
I (and probably many others) might be overreacting to nothing, but if the dGPU is gone for good, then they should change the name to just the Retina MacBook.
 