This is a pro notebook, not an "I need something to update my MySpace" notebook.
So which Pro application are you running that is maxing out the 650M?
Correct! Nor is it an "OMG, Battlefield 3 should be X frames faster" machine.
So you would pay over two grand (we're going by dollars here, not euros, so over in Spain it would be even more) for a quad-core processor and just integrated graphics? For me, no thanks.
Except BF3 performance and FCPX performance are related.
Do you think when it comes to Final Cut Pro X the Iris Pro is suddenly going to turn around and become an all star performer?
It isn't a regression - it's a huge improvement over the HD 4000, which is what most rMBPs today run most of the time. That's a day-to-day kind of improvement.
It's only a regression if you're intent on heavy gaming on a non-optimized gaming platform. That's just not what most people use MBPs for.
Now that's confusing.
As far as I'm aware, in PowerVR it is a hardware implementation.
This first step is mainly carried out by the main CPU. The CPU runs a game written using D3D or OpenGL. The output of those games is standardized and is based on triangles. This output is placed in a buffer, the scene buffer. For PVRSG this buffer has to be big enough to contain all triangles of a complete scene (one frame or image). For traditional renderer this buffer can be smaller. This scene building is done triangle per triangle until the whole scene is finished.
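As a rough illustration of that first step, here is a minimal sketch in C (illustrative types and names only, not PowerVR's or anyone's actual driver code): the CPU simply keeps appending submitted triangles to a per-frame scene buffer, and only once the whole frame has been collected does tiling and rasterization start.

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative types only -- not from any real driver or API. */
typedef struct { float x, y, z; } Vertex;
typedef struct { Vertex v[3]; } Triangle;

typedef struct {
    Triangle *tris;     /* every triangle submitted for the current frame */
    size_t    count;
    size_t    capacity;
} SceneBuffer;

/* Step 1 of a tile-based deferred renderer: the driver collects the whole
 * scene before rasterizing anything. An immediate-mode renderer could get
 * away with a much smaller buffer and flush it continuously. */
static int scene_buffer_push(SceneBuffer *sb, Triangle t)
{
    if (sb->count == sb->capacity) {
        size_t cap = sb->capacity ? sb->capacity * 2 : 1024;
        Triangle *p = realloc(sb->tris, cap * sizeof *p);
        if (!p)
            return -1;  /* out of memory */
        sb->tris = p;
        sb->capacity = cap;
    }
    sb->tris[sb->count++] = t;
    return 0;
}
```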
It's pretty close in Crysis Warhead
Also, when it's running on an Intel GPU, it uses some special effects that are not possible on AMD/Nvidia hardware, so a direct comparison is simply not feasible. But it shows what the architecture is capable of.
However, are you saying that GPU manufacturers actually keep an eye on what is out there and adjust their drivers accordingly? So true! Then you should also know that game developers do the same: they tweak with the underlying architecture in mind. Now it becomes interesting: looking at the HD 4000 scores, do you think the Iris Pro's predecessor was a worthy target for mid-to-high settings optimization?
And before you say that's with the 55W part: it's still far less than a discrete GPU and GDDR5 memory.
Intel acquiring nVidia? Huh?
Of course. Because you quoted it out of context.
Number of processors = I meant shader processing units.
Number of execution units = could be anything. Hell, some GPUs have dedicated H.264 decoding and encoding units. Those don't do anything for benchmarks... though they do allow you to offload video decoding/encoding from the CPU.
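For what it's worth, a compute API will only ever report the device's general-purpose compute units; fixed-function blocks like video decoders don't show up there at all. A minimal OpenCL query (the calls are standard OpenCL; how "compute units" map onto marketing terms like shader processors or execution units differs per vendor):

```c
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];
    cl_uint units;

    /* Grab the first platform and its first GPU device. */
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
        return 1;

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof name, name, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof units, &units, NULL);
    printf("%s: %u compute units\n", name, units);
    return 0;
}
```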
And you need to read more:
http://www.beyond3d.com/content/articles/38/1
PowerVR rendering architecture (1996): The rasterizer consisted of a 32×32 tile into which polygons were rasterized across the image across multiple pixels in parallel. On early PC versions, tiling was performed in the display driver running on the CPU. In the case of the Dreamcast console, tiling was performed by a piece of hardware.
"...the inherent benefits of the hardware-based deferred rendering of our PowerVR TBDR architecture."
And synthetics reflect actual performance? Hmm... nope.
If you want big numbers, sure. But there are people who want real results.
And it likely would increase power consumption or heat under day-to-day desktop use when the dGPU does not kick in.
I hope Apple will give us GTX 765M plus IGZO display. Holla holla take my dolla!
And Imagination Tech's take on the matter, regarding PowerVR Series5XT:
http://withimagination.imgtec.com/i...wervr-tbdr-and-architecture-efficiency-part-4
Games indicate development effort more than anything else. 3DMark benchmarks are made to actually reveal what hardware is capable of. It's not a perfect metric, but yes, it matters.
More silicon and lower clocks simply equal lower power consumption, just like in the new Air. I guess it depends on how the turbo is implemented. The L4 cache also consumes some power, but it also helps CPU performance, so let's not jump to conclusions.
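That tradeoff is just the first-order CMOS dynamic-power relation, P ≈ C·V²·f: more active silicon (capacitance) at a lower clock and slightly lower voltage can still land below a smaller, faster block. A toy calculation with made-up numbers:

```c
#include <stdio.h>

/* First-order dynamic power, relative units: P ~ C * V^2 * f. */
static double dyn_power(double cap, double volt, double freq)
{
    return cap * volt * volt * freq;
}

int main(void)
{
    double small_fast = dyn_power(1.0, 1.00, 1.0); /* baseline block         */
    double wide_slow  = dyn_power(2.0, 0.90, 0.6); /* 2x silicon, 0.6x clock */
    printf("wide/slow relative to small/fast: %.2f\n", wide_slow / small_fast); /* ~0.97 */
    return 0;
}
```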
Without regard to the current discussion, and to answer the OP's question: no, short and simple. Otherwise just call it a MacBook and be done with it.
The same sort of apps that are leading Apple to put two high end GPUs in the Mac Pro.
Are you seriously suggesting that there are no pro apps which max out the 650m?
Anything to write home about? It is really close, which means you can run almost exactly the same settings in most games as on a 650M. You won't really be able to tell a 20% difference while doing anything with it. If you read reviews of new GPUs, that is the kind of difference that was quite usual between AMD and Nvidia in different games. That's why performance was averaged and nobody said "oh, complete fail, it is so much slower". You are really applying different standards.

Also, on a side note, the only times the Iris Pro was able to catch up to the 650M or surpass the GT 640M were when the TDP was increased to 55W. At 47W stock, it isn't really anything to write home about.
What I tried to point out is that moving memory accesses closer saves power over wide, far-away accesses.

Tile-based rendering actually uses the CPU for the first part of the composition process to split the scene into tiles, so in essence the CPU cache is used as a buffer for the rendering.
https://en.wikipedia.org/wiki/Tiled_rendering
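A minimal sketch of that splitting step, assuming made-up tile sizes and types (not any particular driver's code): the screen is divided into small tiles, each triangle is binned to the tiles its bounding box touches, and each tile is then resolved into a buffer small enough to stay in cache or on-chip memory instead of external DRAM.

```c
#include <stdint.h>
#include <string.h>

#define TILE_W   32    /* 32x32 tiles, as in the early PowerVR parts */
#define TILE_H   32
#define SCREEN_W 1280
#define SCREEN_H 720
#define TILES_X  (SCREEN_W / TILE_W)
#define TILES_Y  (SCREEN_H / TILE_H)

/* Pixel-space bounding box of a triangle, assumed already clipped to the screen. */
typedef struct { int minx, miny, maxx, maxy; } Box;

/* Binning (done on the CPU in early PC implementations): mark every tile the
 * triangle's bounding box overlaps. A real renderer keeps per-tile triangle
 * lists; a flag per tile is enough to show the idea. */
static void bin_triangle(uint8_t bins[TILES_Y][TILES_X], Box b)
{
    for (int ty = b.miny / TILE_H; ty <= b.maxy / TILE_H; ty++)
        for (int tx = b.minx / TILE_W; tx <= b.maxx / TILE_W; tx++)
            bins[ty][tx] = 1;
}

/* Per-tile resolve: only one TILE_W x TILE_H colour buffer is live at a time,
 * so all the read-modify-write traffic stays in fast local memory. */
static void shade_tile(uint32_t tile[TILE_H][TILE_W])
{
    memset(tile, 0, TILE_W * TILE_H * sizeof(uint32_t));
    /* ...rasterize only the triangles binned to this tile here... */
}
```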
The technique does indeed make better use of low-latency, low-bandwidth memory. But what do smartphones have to do with Apple using Iris Pro? They aren't trying to run mobile applications on their computers...
One could also say that "dedicated" means a GPU that has its own memory access. Architecturally, that is what makes the difference, whether it is on die or not.

Nope. If they "integrate" GDDR5 memory into the die, it'll just be called on-chip memory or embedded memory, much like what the Xbox 360 has (10MB of eDRAM).
"Dedicated" means the GPU is on its own... outside of the CPU.
And yeah, that's the only distinction. Otherwise, iGPU and dGPU are treated the same. The only reason why iGPUs have been treated like third-rate performers is because they are always slower than the dGPUs of the time.
Just because stuff is on die doesn't make its power consumption disappear. A big die is even easier to cool, but GDDR5 and its high-clock-rate memory controller would remain a bad choice for power consumption.

Huge die size or small die size makes no difference as long as the thermal profile is reasonable and the heatsink design is good enough.
Seriously, take an Intel Pentium 4 and compare it to the die size of an Iris Pro and then tell me if adding GDDR5 would be terrible efficiency.
I have about a 20°C difference at the same fan speed between the two, doing nothing. A dGPU, even if it does nothing, needs a lot more power than the iGPU in active mode, because it has to power its own memory controller, memory chips, and lots of low-clocked but not completely power-gated transistors.

The dGPU is forced on because all external display connectors are routed to it. But there's no extra heat or fan noise compared to integrated, because the dGPU is barely stressed playing videos and browsing. It's only when you start playing a video game that the fans start to rev up.
Seriously, the dGPU doesn't have to run at full speed all the time.
You are not reading properly. I did remark on the topic of lowered performance at higher settings. (It is not just higher resolutions but also settings; we don't have any benchmarks yet of higher resolutions without higher settings. I usually play MW on rather low settings but at native res.)

You must be joking...
Because in Anand's testing, the only benchmark where Iris Pro actually matches the 650M is Grid 2.
Here's Battlefield 3:
And where did you find these benchmarks?

Really?
Benchmarks say otherwise:
3DMVantage GPU score:
DDR3 29671
GDDR5 35334
Difference +19%
3DM11 GPU score:
DDR3 2145
GDDR5 2156
Difference +0.5%
Heaven 2.5
DDR3 750
GDDR5 777
Difference +3.6%
Games:
Street Fighter 4
DDR3 136fps
GDDR5 163fps
Difference +19.4%
Resident Evil 5
DDR3 66.5fps (weaker CPU bound)
GDDR5 121.4fps
Difference +83%
Lost Planet 2
DDR3 26.6fps
GDDR5 31.8fps
Difference +19.5%
A 20% performance drop is kind of a big deal, considering that's actually close to the difference between the 650M and the HD 5200.
In fact, Resident Evil 5 showed a whopping 83% difference, suggesting that the game made heavy use of texture streaming, and DDR3 couldn't cope.
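For reference, the percentages above are just the relative gain of the GDDR5 result over the DDR3 one, (GDDR5 - DDR3) / DDR3; a quick check against two of the listed numbers:

```c
#include <stdio.h>

/* Percentage gain of the GDDR5 result over the DDR3 result. */
static double gain(double ddr3, double gddr5)
{
    return (gddr5 - ddr3) / ddr3 * 100.0;
}

int main(void)
{
    printf("3DMark Vantage GPU: %+.1f%%\n", gain(29671.0, 35334.0)); /* ~ +19% */
    printf("Resident Evil 5:    %+.1f%%\n", gain(66.5, 121.4));      /* ~ +83% */
    return 0;
}
```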
You shouldn't overestimate it either. The difference between an HD 5200 and a 650M isn't big at all, and clearly the 650M can still keep up with just DDR3 in almost any benchmark without being starved out. You also mentioned in many posts that 128MB is too little, while it actually is more than big enough. It is just a cache, and a cache only has to hold enough data to reach a hit rate of some 90%. It is not comparable to a big dedicated VRAM. The DDR3 system memory is still closer in latency than dedicated GDDR5 memory.

I suspect that part of the difference between the HD 5200 and the 650M in Anand's benchmarks is also due to this.
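The cache argument comes down to the usual average-access-time estimate: as long as most requests hit the 128MB eDRAM, the occasional trip out to system DDR3 costs little on average. The latency figures below are placeholders purely for illustration, not measured values:

```c
#include <stdio.h>

/* Average access time = hit time + miss rate * extra miss penalty.
 * Both latencies are made-up placeholders, not measurements. */
static double avg_access_ns(double hit_rate, double hit_ns, double miss_extra_ns)
{
    return hit_ns + (1.0 - hit_rate) * miss_extra_ns;
}

int main(void)
{
    printf("90%% eDRAM hits: %.0f ns\n", avg_access_ns(0.90, 30.0, 80.0)); /* 38 ns */
    printf("60%% eDRAM hits: %.0f ns\n", avg_access_ns(0.60, 30.0, 80.0)); /* 62 ns */
    return 0;
}
```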
You shouldn't underestimate memory bandwidth. Especially not for higher resolutions.
At that time all they had was the GMA stuff. Atom was a new product and they didn't have many of the parts down. Merrifield will be the first actual product where they think they have all the parts down. The older smartphone Atoms were just a first try where they put their core in and stuffed in the rest from third parties. It is not just the GPU in there that isn't from Intel, and the integration still sucked. It is just pointless to argue about that. Now they are putting in Gen7 graphics, and when it is out one can argue about where Intel graphics stand.

Atoms came with Intel's GPUs as well. But it was painfully obvious that Intel just sucked at producing low-power GPUs for their CPUs, so the really low-power Atom chips had to use PowerVR GPUs.
We have very different ideas of "considerable". IMO, considerable is when you don't need a benchmark to notice a big difference, or when a small settings tweak doesn't fix an annoying problem.

At 47W, it was consistently behind the GT 640M. Again, see the charts up there.
Indeed. It is weird how people can get so worked up about some 0-40% difference when a mainstream desktop GPU is far away and iGPUs used to deliver less than a third of the performance.

throAU said: All laptop GPUs are crap (relatively speaking). If you want maximum OpenGL or OpenCL performance, use a desktop. It's not the designer's fault - there's only so much you can achieve in 45W or less.
Anything to write home about? It is really close, which means you can run almost exactly the same settings in most games as on a 650M. You won't really be able to tell a 20% difference while doing anything with it. If you read reviews of new GPUs, that is the kind of difference that was quite usual between AMD and Nvidia in different games. That's why performance was averaged and nobody said "oh, complete fail, it is so much slower". You are really applying different standards.
The question is whether there is anything that you cannot do anymore because of the regression in performance. If there isn't, then it is nothing to write home about. Here it mostly comes down to good drivers, and Iris Pro will be quite capable of handling anything people used to throw at the 650M.
What I tried to point out is that moving memory accesses closer saves power over wide, far-away accesses.
One could also say that "dedicated" means a GPU that has its own memory access. Architecturally, that is what makes the difference, whether it is on die or not.
Intel couldn't use GDDR5 because it sucks for the cache-like way they use the memory. The Xbox uses SRAM, which has very low latency; DRAM is in between; GDDR5 is just bad in that respect. It is much better to simply make a very wide access, as they did with the 50GB/s eDRAM. AMD and Nvidia had their share of problems getting GDDR5 memory controllers power efficient. Intel putting that power hog onto the die would just not work out for power efficiency. GDDR5 isn't worth it.
You are not reading properly. I did remark on the topic of lowered performance at higher settings. (It is not just higher resolutions but also settings; we don't have any benchmarks yet of higher resolutions without higher settings. I usually play MW on rather low settings but at native res.)
And in all the other benchmarks, the gap between Iris Pro and the 650M at lower settings doesn't really grow at higher settings; the difference appears to stay relatively the same. Except for Battlefield, and there it could be all kinds of things. It could be some feature that is turned on at high settings that the Intel GPU doesn't cope well with and that requires some driver tuning. These benchmarks don't suggest a data-starved GPU.
And where did you find these benchmarks?
I very much doubt that memory performance is responsible for the difference between the Iris Pro and the 650M. If you actually compare the DDR3 and GDDR5 650M in notebookcheck's benchmarks, they are within a very tight range of, on average, maybe 5%. There are clock speed differences, but if a 650M @ 835MHz were data-starved in Battlefield 3, why does it yield very similar performance at playable frame rates?
Battlefield doesn't appear to have memory issues. Keep in mind that the Iris Pro still has way more bandwidth available than a 650M with DDR3.
I doubt very much Battlefield 3 problems have anything to do with memory access.
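Ballpark peak-bandwidth arithmetic behind that comparison: bandwidth is roughly bus width in bytes times the effective transfer rate. Exact clocks vary between notebook models, so these are ballpark figures only, and on top of its system memory the Iris Pro also has the eDRAM at roughly 50GB/s mentioned earlier.

```c
#include <stdio.h>

/* Peak bandwidth in GB/s: (bus width in bits / 8) * transfer rate in GT/s. */
static double peak_gbps(double bus_bits, double gtps)
{
    return bus_bits / 8.0 * gtps;
}

int main(void)
{
    /* Typical configurations; exact clocks differ from laptop to laptop. */
    printf("650M, 128-bit DDR3 @ ~1.8 GT/s:  %.1f GB/s\n", peak_gbps(128, 1.8)); /* 28.8 */
    printf("650M, 128-bit GDDR5 @ ~4.0 GT/s: %.1f GB/s\n", peak_gbps(128, 4.0)); /* 64.0 */
    printf("Dual-channel DDR3-1600 (system): %.1f GB/s\n", peak_gbps(128, 1.6)); /* 25.6 */
    return 0;
}
```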
You also mentioned in many posts that 128MB is too little, while it actually is more than big enough. It is just a cache, and a cache only has to hold enough data to reach a hit rate of some 90%.
We have very different ideas of "considerable". IMO, considerable is when you don't need a benchmark to notice a big difference, or when a small settings tweak doesn't fix an annoying problem.
Indeed. It is weird how people can get so worked up about some 0-40% difference when a mainstream desktop GPU is far away and iGPUs used to deliver less than a third of the performance.
Indeed it is!
Correct! Nor is it an "OMG, Battlefield 3 should be X frames faster" machine.
And why should it be, when none of the benchmarked games (except Grid 2, which is faster on Intel hardware, with added special effects, but that's a glitch, 'cause it's not in line with "it's simply not capable enough") are optimized for Intel graphics? Why? Because the HD 4000, the most powerful Intel GPU until recently, has 40% of the performance of the HD 5200. Why would they bother?
In these tasks...
I'm 100% sure that some benchmarks were "tinkered" with to show a better score than they normally would...
I agree with people that discrete cards are not just for gaming, but I still want something solid. I care about power efficiency, heat, and everything else, though dGPUs are improving just as iGPUs are. I mean, you're telling me that with Maxwell and beyond, Apple isn't going to take advantage? Why not? I certainly give Intel credit where it's deserved, but they aren't at the gold level yet. To me they're fighting for the silver, but if Iris Pro works well enough, the bronze medal is surely theirs. That probably doesn't make much sense, but oh well.
Which, of course, are hardly relevant to the OS X platform.
I mean, OpenCL?!
It's not like Apple invented the freakin' thing, and it's sure as hell not like they will put some pressure on developers to use it more, right? Has anybody heard Phil Schiller at this year's WWDC say something like, hmm... "You all know you should use it"?! 'Cause I didn't! Or maybe he did, but he meant something else?
So why bother giving them (developers, developers, developers) a compelling mobile platform to target for GPU-accelerated work (great compute-oriented GPUs in all of Apple's notebook lineup, top to bottom)?