I (and probably many others) might be overreacting to nothing, but if the dGPU is gone for good, then they should change the name to just the retina MacBook.
So, the "Pro" label is for dGPU users...
Well, considering the last gen HD 4000 offered 16 execution units vs. 40 in HD 5200, yes, I am sure it will be up to the task.
Compared to the 384 shader units in the 750M, that's almost competitive!
And how many execution units there are in the GPU shouldn't matter. If we go by that logic, the 650M with 384 shader units should completely demolish HD 5200 with only 40 units.
Intel execution units are 16-wide, Nvidia cores are 2-wide. Do the math.
So, I guess I'm canceling my Titan order then. It turns out that I'd better settle for GT 650, 2688 cores wouldn't matter anyway!
Core speed: 400 MHz vs. 967 MHz
Actual shader performance: 784 vs. 384
Since Intel's EUs are 16-wide vector units, one would have to compare 384 vs. 640 (40 × 16) to make a somewhat fair comparison.
Nvidia is better at keeping its units loaded at maximum efficiency, but it has fewer of them and they don't even turbo as high. So by that same dumb logic, Iris is the more powerful one.
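To put rough numbers on that, here is a tiny back-of-the-envelope sketch in Python. It just multiplies unit count by claimed SIMD width and clock; the widths (16 lanes per Intel EU, 2 per Nvidia core) and the clocks (1300MHz turbo for Iris Pro, 900MHz for the 650M) are the figures thrown around in this thread, not spec-sheet gospel, and the "throughput proxy" ignores utilization, drivers and bandwidth entirely.

```python
# Back-of-the-envelope "effective lane" comparison, using the numbers
# claimed earlier in the thread (assumptions, not official spec figures).

def effective_lanes(units, simd_width):
    """Unit count scaled by how many scalar ops each unit can issue per clock."""
    return units * simd_width

def throughput_proxy(units, simd_width, clock_mhz):
    """Very rough proxy: lanes * clock. Ignores architecture, drivers, bandwidth."""
    return effective_lanes(units, simd_width) * clock_mhz

# Figures as claimed in the posts above:
iris_pro = throughput_proxy(units=40,  simd_width=16, clock_mhz=1300)  # 40 EUs, "16-wide"
gt_650m  = throughput_proxy(units=384, simd_width=2,  clock_mhz=900)   # 384 cores, "2-wide"

print(f"Iris Pro 5200 proxy: {iris_pro:,}")
print(f"GT 650M proxy:       {gt_650m:,}")
print(f"Ratio (Iris / 650M): {iris_pro / gt_650m:.2f}")
```

By this naive proxy the two land within about 20% of each other, which is really the point: raw unit counts prove nothing once width and clocks enter the picture, and the proxy still says nothing about real-world utilization.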
Adding some nearby cache like that L4 is for the most part a power-saving feature. It is just the cheapest way to handle the bandwidth requirements with minimal power use. It is not supposed to be as big as a full 2GB of VRAM; it only needs to be big enough to reduce the load on the two 64-bit DDR3 channels. There is a reason smartphone GPUs worked differently: if they could keep data close, they could save lots of power.
They obviously cannot use GDDR5 like the PS4 because that would kill CPU performance; GDDR is bandwidth-optimized and bad at latency. Game developers may be willing to program around that problem in games, but for everyday desktop applications it wouldn't be good.
Putting that L4 in place adds a lot of bandwidth for a third of the power cost of anything else.
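A quick sketch of that hit-rate argument, with made-up but plausible numbers (25.6 GB/s for two 64-bit DDR3-1600 channels and 60 GB/s of GPU traffic are assumptions for illustration only): only the L4 misses have to go out to DRAM, so the bandwidth the DDR3 channels actually have to supply drops linearly with the hit rate.

```python
# Illustrative only: how an L4/eDRAM hit rate reduces the load on the two
# DDR3 channels. All numbers here are assumptions, not measurements.

DDR3_BW_GBPS = 25.6   # assumed: 2 x 64-bit DDR3-1600 channels

def ddr3_traffic(gpu_traffic_gbps, l4_hit_rate):
    """Only cache misses have to go out to system DRAM."""
    return gpu_traffic_gbps * (1.0 - l4_hit_rate)

for hit_rate in (0.0, 0.5, 0.7, 0.9):
    need = ddr3_traffic(gpu_traffic_gbps=60.0, l4_hit_rate=hit_rate)
    verdict = "fits" if need <= DDR3_BW_GBPS else "exceeds DDR3"
    print(f"hit rate {hit_rate:.0%}: {need:4.1f} GB/s to DRAM ({verdict})")
```

Under those assumed numbers, somewhere around a 60% hit rate is already enough to keep the two DDR3 channels out of saturation, which is the whole "cheap bandwidth for little power" argument in a nutshell.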
The thing about all this is that iGPUs simply have more options for better power efficiency. Today it is similar to somewhat lower performance, but each generation it will be harder for Nvidia to keep up. They will be pushed more and more into the higher TDP classes to make a case for their GPUs.
The 2010 MBP was basically a 73W notebook. Now we are at 100W. With Haswell it will go down to about 60-70W (10W screen, Turbo). Those added ~40W for a dGPU should deliver more than just somewhat faster performance to be worth it, especially for Apple with their crap automatic switching that turns on unnecessarily way too often.
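One way to frame the "are the extra ~40W worth it" question is straight performance-per-watt arithmetic. The sketch below uses the rough wattages from the post above; the relative GPU performance values are placeholders purely for illustration, not benchmark results.

```python
# Rough perf-per-watt framing of the iGPU-vs-dGPU question.
# Wattages are the thread's rough estimates; the relative GPU performance
# values are placeholders for illustration only, not benchmark results.

systems = {
    "Haswell, Iris Pro only": {"power_w": 65,  "relative_gpu_perf": 1.0},
    "Haswell + ~40W dGPU":    {"power_w": 105, "relative_gpu_perf": 1.5},
}

for name, s in systems.items():
    perf_per_watt = s["relative_gpu_perf"] / s["power_w"]
    print(f"{name}: {s['relative_gpu_perf']:.1f}x GPU perf at {s['power_w']} W "
          f"-> {perf_per_watt:.4f} perf/W")
```

With those placeholder numbers the dGPU machine is ~50% faster but ~60% hungrier, so its perf/W actually drops; the dGPU only earns its keep if the performance gain clearly outgrows the power it adds.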
If one complains about OpenGL performance, I would really look more at OSX performance overall. Drivers are a big part of that, and Apple will probably try to make sure there isn't a big difference (to the 650M) to complain about.
- Bandwidth is only an issue for the faster GPUs. Intel added the L4 cache to break through the barrier of slow DDR3.
- The architecture changed a lot. About 5-7 years ago Intel started hiring lots of GPU experts. It takes a while to get results, but this new GPU architecture is made by people who used to work at Nvidia/AMD/others. It has nothing to do with the little GMA-like afterthought GPUs of the past.
The HD 5200 will definitely not be as fast as the current-gen dedicated alternatives, but the real issue is whether that difference justifies the power difference. Nobody complained about the 6750M being crap, and the Iris Pro will definitely be better than that one.
I think this time around a notebook would need a 765M or faster to show a big enough performance difference to really be worth it.
If the stock 650M can leave Iris Pro behind on benchmarks, then I'm sure you'll see an even more pronounced difference with an overclocked 650M.
Yes, it still looks a lot different from the number 40, and accounting for clocks the Nvidia at 900MHz is still behind.

Nope. If you want to compare vector units, nVidia's are 2 vector units in 1 (pixel and vertex), so nVidia would be 768. That's 768 vs. 640.
The point on smartphones is that they use(d) tile-based rendering, which can reach better performance with the limited bandwidth.

Smartphones share system memory with graphics memory. Some smartphones (Android) have dedicated video memory, though.
Once you implement a separate GDDR5 memory controller you effectively have a dedicated GPU. That would be completely pointless. Desperately trying to put that on one package would shrink the logic board but probably be less efficient than a CPU + dGPU system (given that it would be an Intel dGPU, and Intel has never dealt with GDDR5 before).

No one said the CPU had to use GDDR5 for system memory. There is still a memory controller for DDR3 integrated into the CPU die, you know...
That said, they could have gone dedicated GDDR5 for Iris Pro. But they decided not to.
Reason: it would make the die size much bigger and add even more heat since they need to implement a separate memory controller just for the iGPU.
And if they don't integrate it onto the CPU die, the motherboard design wouldn't make sense, since the board would have to incorporate some sort of video memory... that's only used if you put in a CPU with an Iris GPU.
In the end, the cost (GDDR5 does cost more) and disadvantages (bigger die size, more heat, increased TDP) are just not worth it.
What about when an external display is on? I use an external display almost all the time when the notebook is on my desk. All I do with two screens is browse and play videos as the most demanding stuff, but the fans have to deal with a useless dedicated GPU.

Nope. Install gfxCardStatus on Mavericks and you'll see.
The dGPU kicks in ONLY when Photoshop or something else is running.
It's all iGPU otherwise.
That's how Mavericks increased battery life so much.
That surprises me. If they go all Intel, I assume they have to do what they can. Why would they want to do it otherwise? It is not like adding a 760M would be impossible. If MSI can do it at 22mm thickness with only one fan, Apple's two fans should be able to handle it, even without the 37W CPU.

Intel's drivers on Linux are still far faster than Intel's drivers on OSX when it comes to OpenGL performance. And yeah, that applies even now. Unless Apple does a 180, I don't expect OpenGL performance for Iris Pro on OSX to even match the 650M, which is pretty much on par with its Windows counterparts now because nVidia has been stagnant with their drivers for a while.
The only Iris Pro benchmarks I have seen so far are from Anandtech, and there is only one exception where high settings make Iris Pro lose ground, and that is Battlefield 3. That could have all sorts of reasons. The other benchmarks look just fine.

Also, Iris Pro has far too many hardware limitations (TDP limiting Turbo, memory bandwidth not high enough, only 128MB of high-bandwidth memory, etc.) for it to match the 650M. Those limitations already showed up in benchmarks at high resolutions. I don't think that will change unless Apple purposefully cripples the 650M to make Iris Pro look better.
Who needs 100% efficiency? The only goal is to get a high enough L4 cache hit rate that the two DDR3 channels can handle the load, meaning they aren't actually running at 100% load. If they did, chances are the processor would quite often be starved for input data.

Note: even assuming 100% efficiency between the two channels of DDR3 and the L4 cache, Iris Pro does not have as much bandwidth as last year's 650M with GDDR5.
And if you're in the tech industry, you'd know 100% efficiency is a pipe dream, especially when you consider that you have three different buses (two DDR3 channels and the L4 cache) to worry about.
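For anyone who wants the raw numbers behind that bandwidth point, here is the arithmetic spelled out. The figures are the commonly quoted ones (dual-channel DDR3-1600, the ~50 GB/s L4 figure mentioned later in the thread, and an assumed 128-bit GDDR5 650M at 5 GT/s), so treat them as approximate, not measured.

```python
# Peak-bandwidth comparison behind the "DDR3 + L4 vs GDDR5" argument.
# All figures are approximate/commonly quoted values, not measurements.

def bus_bandwidth_gbps(bus_bits, transfer_rate_mtps):
    """Peak bandwidth = bus width in bytes * transfer rate."""
    return (bus_bits / 8) * transfer_rate_mtps / 1000.0

ddr3_dual  = 2 * bus_bandwidth_gbps(64, 1600)   # two 64-bit DDR3-1600 channels
edram_l4   = 50.0                               # ~50 GB/s figure cited in the thread
gddr5_650m = bus_bandwidth_gbps(128, 5000)      # assumed 128-bit GDDR5 @ 5 GT/s

print(f"Dual-channel DDR3-1600 : {ddr3_dual:5.1f} GB/s")
print(f"eDRAM L4 cache         : {edram_l4:5.1f} GB/s")
print(f"DDR3 + L4 combined     : {ddr3_dual + edram_l4:5.1f} GB/s")
print(f"GT 650M GDDR5 (128-bit): {gddr5_650m:5.1f} GB/s")
```

Even adding the two pools at face value (which you can't really do in practice), the GDDR5 650M still comes out ahead on peak bandwidth, which is the point being made here.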
To be fair, Atom is really behind in just about everything. It is 32nm, has no integrated memory controller, and uses a GPU from someone else. It was more of a prototype to work out whether an x86 core can be power-efficient enough. The real smartphone Intel CPU will be Merrifield, which is 22nm with an Intel GPU and a fully integrated SoC.

Yeah, and even now, for true power saving, Intel still has to rely partially on PowerVR GPUs for their super-low-power Atom chips.
Their project Larrabee was cancelled because its performance was not up to par.
I don't think Intel has had any real GPU success story to boast about up to this point. And judging from benchmarks, Iris Pro may break the norm, but it is still not comparable even to last year's dGPU.
As mentioned, Anand actually compares against the 900MHz 650M. Not sure what you are trying to point out with that. If he had compared the 650M at default clocks, Iris Pro would probably look quite favorable. Good thing he didn't, since the 650M was rarely ever used at stock clocks; almost all high-end notebooks clocked it at 900MHz or even higher.

Please keep in mind that Apple did overclock the 650M in the rMBP to match or exceed 660M performance. It's truly 15-20% faster than stock speed (750MHz stock vs. 950MHz on the rMBP).
See above.

If the stock 650M can leave Iris Pro behind in benchmarks, then I'm sure you'll see an even more pronounced difference with an overclocked 650M.
A 765M and a proper Optimus-like driver would definitely be better. If Apple believed in options, they would add a low-end Intel-only model with great thermals and battery life, and something with as much speed as can possibly be cooled. Apple usually doesn't believe in confusing its customers with too many options, and given that 4950HQ Geekbench result it seems quite certain they think Iris Pro is enough for what people use their MacBook Pros for. The MacBook Pros have never been about best possible performance.

Granted, it's not crap, but it's still a regression in graphics performance no matter how you want to spin it. And obviously, even with an overclocked 650M, the rMBP still craves more performance.
Where power saving is concerned, as noted, Mavericks changed things significantly and the dGPU isn't kicked in unless very specific professional applications (Photoshop, AutoCAD, Maya) are open.
In that case, I'd think that for power saving, HD 4600 would make a lot more sense (considering HD 4000 is already enough to handle the desktop smoothly), coupled with a 765M or something for those intensive moments.
Anand was using the rMBP, if you mean those benchmarks.
Yes, it still looks a lot different from the number 40, and accounting for clocks the Nvidia at 900MHz is still behind.
The point on smartphones is that they use(d) tile-based rendering, which can reach better performance with the limited bandwidth.
Once you implement a separate GDDR5 memory controller you effectively have a dedicated GPU. That would be completely pointless. Desperately trying to put that on one package would shrink the logic board but probably be less efficient than a CPU + dGPU system (given that it would be an Intel dGPU, and Intel has never dealt with GDDR5 before).
Sony/AMD went all-GDDR5 for a reason: having both memory types is just in no world a good idea. No resource sharing, which means no load balancing, and a huge die size with terrible efficiency.
What about when an external display is on? I use an external display almost all the time when the notebook is on my desk. All I do with two screens is browse and play videos as the most demanding stuff, but the fans have to deal with a useless dedicated GPU.
Also, when you do presentations with a projector attached, a dGPU is a waste of power.
The only Iris Pro benchmarks I have seen so far are from Anandtech, and there is only one exception where high settings make Iris Pro lose ground, and that is Battlefield 3. That could have all sorts of reasons. The other benchmarks look just fine.
The 650M with DDR3 memory is not all that much slower, which suggests that a GPU at this performance level really doesn't need all the bandwidth it has. 50GB/s for that L4 cache should be plenty for Iris Pro.
To be fair, Atom is really behind in just about everything. It is 32nm, has no integrated memory controller, and uses a GPU from someone else. It was more of a prototype to work out whether an x86 core can be power-efficient enough. The real smartphone Intel CPU will be Merrifield, which is 22nm with an Intel GPU and a fully integrated SoC.
As mentioned, Anand actually compares against the 900MHz 650M. Not sure what you are trying to point out with that. If he had compared the 650M at default clocks, Iris Pro would probably look quite favorable. Good thing he didn't, since the 650M was rarely ever used at stock clocks; almost all high-end notebooks clocked it at 900MHz or even higher.
A 765M and a proper Optimus-like driver would definitely be better. If Apple believed in options, they would add a low-end Intel-only model with great thermals and battery life, and something with as much speed as can possibly be cooled. Apple usually doesn't believe in confusing its customers with too many options, and given that 4950HQ Geekbench result it seems quite certain they think Iris Pro is enough for what people use their MacBook Pros for. The MacBook Pros have never been about best possible performance.
True, the MacBook Pro has never been about best possible performance. But I haven't seen any year in which they announced a regression in graphics performance across the board.
Clocks aren't everything, but the number of processing units does indicate performance... in almost every case.
And how many execution units there are in the GPU shouldn't matter.
Tile-based rendering actually uses the CPU for the first part of the composition process to split tiles, so in essence, CPU cache is used as a buffer for the rendering.
You must be joking...
Because in Anand's benchmark, the only benchmark where Iris Pro actually matches the 650M is Grid 2.
It's the same everywhere except for Grid 2, because this happens with Grid 2 for some reason:
[Grid 2 benchmark images]
Same as everything else, but then:
[Grid 2 benchmark image]
That suggests that either nVidia's drivers are poorly optimized for the game at those settings, or Intel is omitting something in their drivers to achieve higher performance.
Really?
Benchmarks say otherwise:
3DMark Vantage GPU score: DDR3 29671 vs. GDDR5 35334 (+19%)
3DMark 11 GPU score: DDR3 2145 vs. GDDR5 2156 (+0.5%)
Heaven 2.5: DDR3 750 vs. GDDR5 777 (+3.6%)
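For what it's worth, those percentage deltas fall straight out of the quoted scores; a trivial recomputation (scores exactly as listed above):

```python
# Recompute the DDR3-vs-GDDR5 650M deltas from the scores quoted above.

scores = {
    "3DMark Vantage GPU score": (29671, 35334),
    "3DMark 11 GPU score":      (2145, 2156),
    "Heaven 2.5":               (750, 777),
}

for bench, (ddr3, gddr5) in scores.items():
    delta = (gddr5 - ddr3) / ddr3 * 100
    print(f"{bench}: DDR3 {ddr3} vs. GDDR5 {gddr5} -> +{delta:.1f}%")
```

Whether a +19% delta in one synthetic test and low-single-digit deltas elsewhere count as "not all that much slower" is exactly what's being argued here.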
I am with icedragon on this one for sure. They may as well bring back the MacBook line; I consider these new retina gadgets MacBooks anyway. They still have a really, really, really long way to go before they earn the PRO part, a title the 17" MBP still holds.
No matter what anyone wants to think, or how they justify chasing after the latest and greatest trends out there... The 17" MBP is the only real MacBook Pro because it always performs well and most efficiently. The retina MacBook gadgets are a great addition for super fast but super small tasks, for sure, but not something to be depended on, on their own, to get through huge tasks in any timely manner.
Wow, some of the most unqualified asinine comments I've heard yet on this thread. Unsubscribing.
So, the "Pro" label is for dGPU users...
So you would pay over two grand (we're going by dollars here, not euros, so over in Spain it would be even more) for a quad-core processor and just integrated graphics? For me, no thanks.
For the past few years Intel has been threatening to make discrete GPUs obsolete with its march towards higher performing integrated GPUs. Given what we know about Iris Pro today, I'd say NVIDIA is fairly safe. The highest performing implementation of NVIDIA's GeForce GT 650M remains appreciably quicker than Iris Pro 5200 on average. Intel does catch up in some areas, but that's by no means the norm. NVIDIA's recently announced GT 750M should increase the margin a bit as well. Haswell doesn't pose any imminent threat to NVIDIA's position in traditional gaming notebooks. OpenCL performance is excellent, which is surprising given how little public attention Intel has given to the standard from a GPU perspective.
Haswell doesn't pose any imminent threat to NVIDIA's position in traditional gaming notebooks.
QFT - with the emphasis on the point that everyone is missing. The rMBP is NOT a traditional gaming notebook.