It could be that gaming on OS X with Apple's Nvidia drivers and an OpenGL game is just a buggy combination. Try Windows and see how the GPUs perform over there: if it does badly, it's probably faulty hardware; if it does well, it's some OS X driver issue with the 750M.
Usually such things are driver issues, because AFAIK faulty hardware usually either doesn't work at all or works fine. The in-between case is usually not a bad chip but cooling or power-supply issues that cause the GPU to throttle.
 
Unfortunately, I'm one of those people who only has Windows 8.1 and is unable to get it installed due to some error, so I can't test this yet… :(
 
All this says to me is that Apple should have used the 765M like I wanted them to.

Gonna need a bigger power supply (it'll pull way more than 85W while gaming), not to mention more cooling. And you can forget using that thing on battery for anything. Just totally unrealistic.

The problem with 2013's GPU options is not a mystery: a rebadge of Kepler chips with no efficiency improvements (or really any changes at all). The only difference between this year's chips and last year's is binning (better yields make better-binned chips cheaper).

Real GPU gains will come next year when Nvidia releases an actual new GPU generation (and Intel continues to try to close the gap, though it's unclear if they can).
 
That will hopefully change with Valve's Steam Box.
Intel aren't great at updating their drivers for gaming; whenever a new game comes out, it is usually a long wait, if ever, before Intel optimise their drivers for it.

Good to hear! I might give the beta drivers a try to see if that helps, if they haven't sorted it by next week or so.

Oh yeah, it's true, they don't release updates after major game releases like Nvidia does...
 
Unfortunately, I'm one of those people who only has Windows 8.1 and is unable to get it installed due to some error, so I can't test this yet… :(

I got 8.1 installed via USB by formatting the partition myself and then booting via BIOS instead of EFI.
 
Gonna need a bigger power supply (it'll pull way more than 85W while gaming), not to mention more cooling. And you can forget using that thing on battery for anything. Just totally unrealistic.

The problem with 2013's GPU options is not a mystery: a rebadge of Kepler chips with no efficiency improvements (or really any changes at all). The only difference between this year's chips and last year's is binning (better yields make better-binned chips cheaper).

Real GPU gains will come next year when Nvidia releases an actual new GPU generation (and Intel continues to try to close the gap, though it's unclear if they can).

I agree that Intel are going to struggle in the future to improve their iGPU performance. Silicon real estate is expensive, so they can't just keep cramming in more and more transistors and cores like Nvidia and AMD do, and the real problem lies in not having lots of dedicated GDDR5 VRAM. I can't see the next Broadwell and Skylake chips coming loaded with the stuff, and until they do, Intel can't ever truly compete with a dGPU.
I'm sure there will come a point where Intel's iGPU is very capable and good enough, but they'll never catch up to a dGPU. I like the current config and options: an Iris or Iris Pro is perfectly capable for the vast majority of users, and then users like myself get the option of a very capable 750M with a healthy 2GB of VRAM if we wish to game - something I hope Apple continues to promote with better OpenGL implementation and support as well as Boot Camp.

... Maxwell will offer a solid performance boost without touching the TDP; I just hope Apple adopts it rather than going full iGPU.
 
Anyone here with the non-750m 15" rMBP model care to share how the Iris Pro runs games, and how it impacts the battery?

I opted for the 750m and the performance is great, although the battery gets eaten quickly with it (2-3hrs of gaming).
 
I got 8.1 installed via USB by formatting the partition myself and then booting via BIOS instead of EFI.

For me this did not help. It complained that it needs a GPT partition instead of MBR. (Or something like that! I forget...)
 
Anyone here with the non-750m 15" rMBP model care to share how the Iris Pro runs games, and how it impacts the battery?

I opted for the 750m and the performance is great, although the battery gets eaten quickly with it (2-3hrs of gaming).

I'll be getting my base rMBP 15" on Thursday.
I didn't want a dedicated GPU, as it generates more heat/noise, which I learned from my former MBPs.
For games like BF3/4 I use my PC. For anything else, such as non-demanding games, video editing, DVB-T recordings etc., I use my base rMBP 15" + an external USB 3.0 2TB 2.5" HDD. I will not "bootcamp" my rMBP.
 
Gonna need a bigger power supply (it'll pull way more than 85W while gaming), not to mention more cooling. And you can forget using that thing on battery for anything. Just totally unrealistic.

No it's not. The Razer Blade has it in a 14in laptop just as thin as the Retina MacBooks, it has two fans, and the power brick isn't very big at all.
 
I will buy the new rMBP with the 750M.

The only reason is that I have no PC and really want to play Battlefield (3 or 4).
I saw benchmarks of BF4 on the 750M (on YouTube) and it is really impressive! At 900p you can play it on medium settings very smoothly (and it is still a beta, so performance will improve with the final version). At 1366x768 you can play it on high settings.
Maybe the final version will be playable at high settings in 900p.

What do you think about the Iris Pro in comparison to these benchmarks?
 
Everything I've seen concerning the Iris Pro and benchmarks shows it doing well on the synthetics, but trust me, as soon as you try gaming (especially on things like BF3/4) the lack of cores, the overall weaker architecture and the almost complete lack of VRAM will seriously hammer its performance. You really need the 750M for anything demanding.

One of the cool things I like about going for the 750M version of Haswell is that the CPU can make use of what is essentially an L4 cache, giving a good boost in certain tasks, while the GPU has its own sizeable stash all to itself.

Just wait for Anandtech's review; no doubt we'll see some clear evidence rather than just conflicting claims and hearsay.
 
[image: waiting for a standard]
 
No it's not. The Razer Blade has it in a 14in laptop just as thin as the Retina MacBooks, it has two fans, and the power brick isn't very big at all.

Uh??

The Razer uses a 150W adapter, the MBP an 85W one.

They are two completely different machines. The Razer has compromised on a lot of things to get that GPU on board; that laptop is marketed at gamers, so it can do that.

The MBP, on the other hand, is not marketed for gaming; it can't (and Apple won't) sacrifice almost everything else to get a bit more gaming performance.

Let's make the count. If I want a 765M instead of a 750M, I have to give up:
my 45W CPU for a 35W one;
my Retina display for a not-even-1080p (1600x900) 14" panel with a bigger bezel than the rMBP;
35% of my battery, to accommodate a 70Wh battery instead of 95Wh;
totally different thermals and noise, starting with vents on the bottom of the laptop so I can't use it on my lap;
and many others...

This is fine if you're going for the specific guy who only wants to game on the laptop. But personally, and I'm a gamer, I wouldn't want to give up all that just for that GPU. If I want gaming performance, there's my desktop.

Stop it with the Razer already; it's obvious you cannot do the same thing as easily as you would like us to believe.
 
Quick runs of LuxMark v2.1beta2:

Iris = 601 points.
750 = 639 points.

I ran LuxMark but noticed that both the 750 and Iris Pro were enabled. I used the option to render with only selected devices. I got this:

Iris Pro only = 611
GT 750M only = 148
Both enabled = 655
 
Looks good for running OpenGL on the 750M while OpenCL runs on the Iris at the same time.
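
That split is doable because OpenCL exposes both GPUs as separate devices (as the LuxMark runs above show), so an app can steer its compute work to one of them regardless of which GPU is drawing. A minimal sketch of the idea, assuming the third-party pyopencl package and picking the device by matching "Iris" in its name, which is just an illustrative heuristic:

```python
# Rough sketch, assuming the third-party pyopencl package is installed.
# Lists the OpenCL GPU devices OS X exposes and builds a context on the
# Iris Pro only, mirroring LuxMark's "render with selected devices" option.
import pyopencl as cl

gpus = [d for p in cl.get_platforms()
        for d in p.get_devices(device_type=cl.device_type.GPU)]
for d in gpus:
    print(d.name, d.global_mem_size // (1024 ** 2), "MB global memory")

# Name matching is just an example; it assumes an "Iris"-named device exists
# (next() raises StopIteration otherwise).
iris = next(d for d in gpus if "Iris" in d.name)
ctx = cl.Context(devices=[iris])
queue = cl.CommandQueue(ctx)  # kernels enqueued here run on the Iris Pro
```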
 
I recently did a set of gaming-centric tests for the Iris Pro 5200 vs. the 750M. In my case, the 750M beat the Iris Pro by 25%-75% in almost every test. Looks like it's still worth it for this generation, at least if you're a gamer or otherwise work with high-performance graphics. (Also, I expect that the Mac Pro will cause a lot of high-performance apps to get updated with OpenCL support, which will be up to twice as good on MacBooks with two graphics chips.)

PS: you can now update the Boot Camp Nvidia driver through Nvidia's installer.
 
This one is probably more useful to Mac owners since it compares the GT 750M and Iris Pro directly, and on a Mac.

http://www.barefeats.com/rmbpc2.html

I wonder how much of that comes down to drivers. Articles like this portray the gap as being far smaller.


----------

No it's not. The Razer Blade has it in a 14in laptop just as thin as the Retina MacBooks, it has two fans, and the power brick isn't very big at all.

It's also a piece of junk and it runs Windows. It is also more expensive than a Mac for some reason.
 
DDR3 is nothing like GDDR5 when it comes to gaming performance. Just look at the performance delta it causes between the Xbox One (DDR3 + eDRAM) and the PS4 (GDDR5).
It makes a big difference: it's much faster, all dedicated, and designed with gaming in mind.

I hate to be picky, but the Xbone uses eSRAM, which has a lower latency than eDRAM.
 
:)
My mistake :).
Just an L4 cache. M$ really cheaped out.
They should have used GDDR5, or DDR3 + a 128/256MB eSRAM cache. 32MB is too small for 1080p gaming. It's a botched idea, with inferior hardware and a resource-hungry OS - not what you want for a dedicated gaming machine.
 
Well, devs have better access to it than just some L4 cache: unlike Crystalwell, it's addressable like normal memory, but the issue is that it's harder to work with than a unified pool of GDDR5. I'm guessing it can be useful for storing the render buffers, which helps a lot with deferred shading (rough numbers in the sketch below). 32MB is enough for a G-buffer at 1080p, or even 4K, and some more assets can be cached in the eSRAM on top of that.

But yeah, it's just simpler to have a pool of GDDR5. The launch titles have shown that this is more effective than slow memory with a fast cache.
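
For a rough sense of how tight that budget is, here is a back-of-envelope calculation (my own numbers, not from the thread), assuming 32-bit (4-byte-per-pixel) render targets, which is a common but not universal G-buffer layout:

```python
# Back-of-envelope: how much of a 32 MB eSRAM pool a 1080p G-buffer occupies,
# assuming each render target stores 4 bytes per pixel (e.g. RGBA8).
width, height = 1920, 1080
bytes_per_pixel = 4
target_mb = width * height * bytes_per_pixel / (1024 ** 2)  # ~7.9 MB per target

for targets in range(1, 5):
    print(f"{targets} render target(s): {targets * target_mb:.1f} MB of 32 MB")
# Three or four 32-bit targets roughly fill the 32 MB, so it's workable at
# 1080p but leaves little room for anything else.
```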
 
Intel said themselves that internal testing showed 32MB is what they'd actually have needed. They went for 128MB because it is just DRAM and cheap, so why not; 128MB of eSRAM would be ridiculously expensive.
It is a cache; it has nothing to do with VRAM sizes. It just needs to be big enough to take enough bandwidth pressure off main memory with typical workloads, and they tested how big that would need to be.

Microsoft also doesn't use it like an L4 cache but only as a special cache for the GPU with its own address space. The CPU in the XBone is probably faster than the PS4's not just because of the higher clocks but also because CPUs prefer the lower latency of DDR3 over GDDR5. Intel will never use GDDR5 because it is bad for CPU performance. They will improve on their eDRAM and use whatever is the cheapest form of memory beyond that.
Once they use stacked memory like HMC they might at some point go eDRAM-only, because that will be enough and it saves space.

Steam boxes and Android boxes might kill the PS4 and XBone because of the 8-year refresh cycle. A Steam box in 5 years will probably be a beast. If they nail the controllers and get enough games, Sony and MS are in trouble.
 
Sony and M$ will never wait as long this time to do a new refresh; the costs to recoup are far, far smaller this time round. I reckon it will be 5 years before we see the next generation, which is fine by me. It will take them two years to really start to stretch the current systems, and the PC architecture is good for further development and for whatever generation succeeds it, which I imagine will be the same PC-style architecture.

As for the eSRAM, I read that developers are already complaining it is too small, and yes, you are right that it's a buffer for rendering, but even still, 32MB is small. As for DDR3's lower latency vs GDDR5's higher bandwidth - remember these are gaming machines; yes, they have other functionality now, but that is still easily serviced by GDDR5. I would much rather choose GDDR5 over DDR3 + eSRAM for a dedicated gaming platform.

The PS4's clock speed is still unknown. I've seen guesses that it's 1.6GHz vs the XBO's 1.75GHz, but I also read it has a dynamic clock speed. Have a look at redgamingtech; they've done a number of interesting articles on the hardware inside these machines.

DDR4 is just around the corner anyway.
 
As far as the cache goes, one can just do a statistical analysis and easily see what the cache hit rate is at different sizes (see the sketch after this post). The eSRAM, if it has to be specifically put to use by developers, is something different. Developers will quickly complain it isn't enough, as M$ chose a size that helps most of the time. I think M$ is hoping that the chip with eSRAM is going to yield them more cost savings with die shrinks. Maybe 8 years would be too long, but they will probably want to enjoy the cheaper production of their console over the PS4 for a while. GDDR5 just won't get any cheaper.

As for the latency, I brought that up to explain why Intel will never adopt GDDR5. On a PS4 it may be worth it (CPU cores are really an afterthought on that one), but for Intel it isn't an option.
I thought the PS4 stayed at 1.6GHz and didn't intend to change that. It wouldn't make a huge difference anyway, I think.

DDR4 is around the corner but still not as fast bandwidth-wise as GDDR5, which has been around forever. The future is eDRAM (HMC and such for a wide, fast, low-latency, low-power cache layer) + cheap DDR4 if needed.
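
To illustrate the kind of hit-rate-vs-size analysis mentioned at the top of that post, here is a toy Python sketch: it replays a synthetic access trace (my own made-up workload) through a plain LRU cache at several sizes and reports the hit rate. Purely illustrative, not Intel's or Microsoft's actual methodology.

```python
# Toy illustration of "measure the hit rate at different cache sizes":
# 80% of accesses go to a small hot set, 20% to a large cold region.
import random
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    cache, hits = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            cache[addr] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(trace)

random.seed(0)
trace = [random.randrange(2_000) if random.random() < 0.8
         else 2_000 + random.randrange(200_000)
         for _ in range(100_000)]

for capacity in (1_000, 4_000, 16_000, 64_000):
    print(f"{capacity:>6} lines: {lru_hit_rate(trace, capacity):.1%} hits")
# The hit rate jumps once the cache covers the hot working set and then
# flattens out -- the knee is what tells you "how big is big enough".
```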
 