GT3 Versus 650m

Discussion in 'MacBook Pro' started by luffytubby, Jan 18, 2013.

  1. luffytubby macrumors 6502a

    luffytubby

    Joined:
    Jan 22, 2008
    #1
    http://www.youtube.com/watch?feature=player_embedded&v=VPrAKm7WtRk



    Pretty impressive? Perhaps a bit too impressive? Maybe this game is more CPU bound, or maybe this sort of integrated graphics only goes into 35W laptops. I don't know, but if this is true, it would have to have almost 3x the performance of the current Ivy Bridge IGP! That seems extreme.


    more at anandtech; http://www.anandtech.com/show/6600/...rformance-compared-to-nvidias-geforce-gt-650m



    What do you think this means for 2013 Macs with Haswell CPUs?
     
  2. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #2
    It is actually much less impressive than it sounds.

    The settings they used are a piece of cake for the 650M. It probably runs them at some 50 fps, but as long as the GT3e manages some 25 fps it looks good enough on video while actually being only half as fast.

    The HD 4000 actually delivers some 21 fps at medium settings, no AA, 1080p in that game. Since those settings are medium-high with shadows (the most demanding part) turned down, it needs only about double the performance to reach fluid fps.
    The 650M may still be way ahead, but you won't see that without them actually showing the real fps.

    What they did show is that the GT3e is good enough for quite a lot of gaming even at higher settings. They didn't show that it is the equal of a 650M.
    Where it may very well beat the 650M is in power efficiency: running on 22nm with reduced memory-subsystem power consumption compared to a 650M, it ends up being extremely efficient. The hiring effort Intel started some 5 years ago in the GPU department is paying off now.
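
    Just to put rough numbers on that reasoning (a back-of-the-envelope sketch in Python; the 50/25/21 fps figures are the estimates above, not measurements):

        def speedup_needed(current_fps, target_fps):
            # how much faster a GPU must be to reach the target frame rate
            return target_fps / current_fps

        # the demo: both cards look "smooth" on video above ~25 fps,
        # even if the 650M is actually rendering ~50 fps
        print(speedup_needed(25, 50))   # 2.0 -> a 2x gap the video hides

        # HD 4000 baseline in that game (medium, no AA, 1080p): ~21 fps,
        # so roughly doubling it already lands in fluid territory
        print(speedup_needed(21, 40))   # ~1.9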
     
  3. bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #3
    I don't think it means much. 60fps and 120fps both look equally smooth when you output to a 60Hz monitor.

    And I think it's very obvious why Intel isn't letting tech reporters have a go at analyzing fps logs and benchmarks on GT3.
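
    A rough illustration of why the video proves little (a sketch, assuming the demo output was vsynced to a typical 60Hz panel, which is a guess on my part):

        REFRESH_HZ = 60  # assumed display refresh rate

        def displayed_fps(rendered_fps, refresh_hz=REFRESH_HZ):
            # with vsync, the screen never shows more frames than it refreshes
            return min(rendered_fps, refresh_hz)

        print(displayed_fps(120))  # 60
        print(displayed_fps(60))   # 60 -> looks identical on camera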
     
  4. strwrsfrk, Jan 18, 2013
    Last edited: Jan 18, 2013

    strwrsfrk macrumors regular

    Joined:
    Mar 1, 2011
    Location:
    Arlington, VA, USA
    #4
    It's certainly impressive to a point. The leaps and bounds made by Intel in the integrated graphics arena are incredible - HD4000 graphics are already a bit better than the 9400m chips in the 2009 13" MacBook Pros, and the jump to GT3-quality HD4600 graphics will likely be on par with the 9600m GT dedicated chips found in 2009 15" MacBook Pros.

    The skepticism regarding these video "benchmarks" with no real numbers behind them is indeed warranted. Without a suite of apples-to-apples comparisons, it's hard to say how close the GT3 series will be to the 650m.

    That being said, the era of Intel integrated solutions being useless or "not good enough" is over. The solutions available are more than capable of powering the retina-quality displays out now, and that will only become more true with each generation and as Apple's drivers mature. With Haswell, we will see integrated GPU performance (in the 13" MacBook Pro, at least; it would be awesome if the chips for the MacBook Airs had the same) on par with dedicated GPU performance from 3-4 years ago (on a far more efficient architecture). And that is definitely impressive.
     
  5. bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #5
    Uh... that's certainly not true. The 9600M GT still edges out HD 4000 by a good margin.

    http://www.notebookcheck.net/NVIDIA-GeForce-9600M-GT.9449.0.html

    http://www.notebookcheck.net/Intel-HD-Graphics-4000.69168.0.html

    Going by benchmarks alone, the 9600M GT is still consistently 50-100% faster than Intel HD 4000... at everything.

    Intel HD 4000 is actually not even comparable to the old GeForce 320M used in 2010 Macs in some cases.

    I know because I have used Macs both with the HD 4000 and GeForce 320M, and it's clear to me that Intel is still not quite there yet.

    Going by the above, even if HD 4600 is twice as fast as HD 4000, it'll still likely match just the 9600M GT in... 2009 Macs (4 years old). There's pretty much no chance of it even catching up to Radeon 6490M or GeForce GT 330M.

    It's actually an educated guess and not skepticism. If you're familiar with how past Intel hardware benchmarks and performs compared to how Intel "claims" it will perform, then it's clear why some are raising an eyebrow at this "comparison".

    Check above to see why this is not true.

    Not to be a downer or anything but... this is Intel we are talking about. They have never ever been able to deliver a single good integrated graphics solution to the market. Ever. I don't see how the trend will start now since they are resorting to dubious tactics ("inventing" SDP, for one) just to make Haswell look better than it actually is.
     
  6. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #6
    It will surely surpass the 330M, as the HD 4000 is already equal. I doubt the HD 4600 should be called "GT3-quality", as it seems those will be GT2, as in only half the execution units (20 EU). It is more likely GT2+.
    I doubt the quad cores come with GT3 this time around. That will come with 14nm and Broadwell. Given the name 4600 and the previous 2500 and so on, I think it is wishful thinking to expect a GT3 hiding behind that name.

    GT3e and GT3 (no 'e') will be in dual-core chips and not in a single quad core.
     
  7. strwrsfrk macrumors regular

    Joined:
    Mar 1, 2011
    Location:
    Arlington, VA, USA
    #7
    Ah, you're right. I was mistaken and looking at incorrect benchmarks. My other post will be edited. The optimist in me has to point out that it does appear the discrepancies between the 320m and the HD4000 (though in the 320m's favor) are only about 5%. Then again, there's not as much 320m data available and I'm extrapolating.


    However, I still disagree with this. Dubious tactics or no, the fact of the matter is Intel integrated solutions are improving at a rapid rate, and are doing so with a close eye on efficiency. Remember, you're comparing an integrated solution to a dedicated one, even if it is a 4-year-old dedicated chip. The next generation of mobile processors will squeeze that power - previously reserved for 15" models - into a 13" chassis. It's still a massive improvement, and is only getting better. You may not be playing Crysis 3 on high, but you're still harnessing power which is more than capable of powering retina content at a pleasant clip.
     
  8. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #8
    You shouldn't be so quick to assume your own mistake.
    Bill-p actually compares the 9600M 3dMark Vantage benchmarks to the HD 4000's 3dmark 11 scores.

    In comparable Vantage benches the 9600M scores 1.4k (P) and 1.1k, while the HD 4000 scores 3.1k and 2.5k in the same benchmarks.

    That is not a GPU that needs to be twice as fast just to keep up. Drivers are still not quite the same, and Intel especially loses on some higher detail settings, but in actual gaming benchmarks it is usually about equal to slightly faster. Some games are outliers, but that is most likely a driver issue or a power-savings thing, as Intel never seems to care to deliver 200 fps.

    If one actually compares the appropriate scores and accounts for the power consumption of the GPU, the HD 4000 is already on par.
    The Intel bashing in the GPU department only still works when one ignores the facts. Drivers could still be better, but that is about it. The hardware makes up for any remaining advantage of Nvidia or AMD with better fabrication (22nm).
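
    Quick ratio check on those Vantage numbers (a sketch using only the scores quoted above):

        gf9600m = {"P score": 1400, "second score": 1100}   # 1.4k (P) and 1.1k
        hd4000  = {"P score": 3100, "second score": 2500}   # 3.1k and 2.5k

        for bench in hd4000:
            # how many times higher the HD 4000 scores in the same Vantage run
            print(bench, round(hd4000[bench] / gf9600m[bench], 2))
        # P score 2.21, second score 2.27 -> both in the HD 4000's favor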
     
  9. bill-p, Jan 18, 2013
    Last edited: Jan 18, 2013

    bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #9
    Uh... no. Look at 3DMark 06 and below benchmarks (Shader 3.0 and below). The 9600M GT doesn't even support DX11, so there was no way to run 3DMark 11 on it. It also suffers with Vantage because the GPU wasn't made with DirectX 10 in mind. It was mostly a Shader 3.0 GPU that also happened to support DX10 thanks to having unified shaders, but that's where things stopped. You can see clearly that it's faster than the HD 4000 on average in 3DMark 06.

    Looking at things more realistically, there are not a lot of game titles that support DX10. Many of them still run on DX9 (Shader 3.0) and have some enhancements with DX11. That being the case, the GeForce 9600M GT with more DX9 performance still makes more sense.

    And you should read actual game benchmarks in there as well:

    Call of Duty 4 Modern Warfare:

    HD 4000 - Low Avg 100.6fps - Medium Avg 43.3fps
    GeForce 9600M GT - Low Avg 133fps - Medium Avg 45fps

    World of Warcraft:

    HD 4000 - Low Avg 89fps - Medium Avg 74fps
    GeForce 9600M GT - Low Avg 250fps - Medium Avg 83fps

    At worst, the HD 4000 is about on par with 9600M GT. But it doesn't surpass 9600M GT, nor does it reach 330M GT performance.

    No, I'm not bashing Intel. I'm just stating facts. Here's more 330M GT vs HD 4000:

    Star Craft 2:

    HD 4000 - Low Avg 115fps - Medium Avg 25fps - High Avg 16fps
    330M GT - Low Avg 151fps - Medium Avg 40fps - High Avg 27fps

    Mafia 2:

    HD 4000 - Low Avg 32fps - Medium Avg 26fps - High Avg 21fps
    330M GT - Low Avg 44fps - Medium Avg 36fps - High Avg 30fps

    Dirt 2:

    HD 4000 - Low Avg 33fps - Medium Avg 26fps
    330M GT - Low Avg 76fps - Medium Avg 44fps

    World of Warcraft

    HD 4000 - Low Avg 89fps - Medium Avg 74fps
    330M GT - Low Avg 208fps - Medium Avg 157fps

    Source:
    http://www.notebookcheck.net/Intel-HD-Graphics-4000.69168.0.html
    http://www.notebookcheck.net/NVIDIA-GeForce-GT-330M.22437.0.html
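
    For reference, here's how the gaps in those numbers work out (a quick sketch over the averages exactly as quoted above, nothing re-measured):

        # (HD 4000 avg fps, dedicated GPU avg fps), figures as listed above
        vs_9600m_gt = {
            "CoD4 low": (100.6, 133), "CoD4 medium": (43.3, 45),
            "WoW low": (89, 250), "WoW medium": (74, 83),
        }
        vs_330m = {
            "SC2 low": (115, 151), "SC2 medium": (25, 40), "SC2 high": (16, 27),
            "Mafia 2 low": (32, 44), "Mafia 2 medium": (26, 36), "Mafia 2 high": (21, 30),
            "Dirt 2 low": (33, 76), "Dirt 2 medium": (26, 44),
            "WoW low": (89, 208), "WoW medium": (74, 157),
        }

        for name, table in (("9600M GT", vs_9600m_gt), ("330M", vs_330m)):
            for label, (hd4000, dgpu) in table.items():
                print(f"{name} {label}: +{(dgpu / hd4000 - 1) * 100:.0f}%")
        # ranges from +4% (CoD4 medium) to +181% (WoW low)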

    And I still can't see how you can say HD 4000 is "equal" to 330M GT. If anything, the HD 4000 does benchmark well... in the case of 3DMark Vantage and 11. But those benchmarks don't really say much, if anything at all since most games are still built on DX9.

    And then when we consider OpenGL, which Mac OSX uses, let's be honest: Intel sucks at delivering good OpenGL performance with their drivers. Even if the GPU is capable, it still wouldn't be able to do much if the drivers aren't capable.

    And the GPU isn't actually capable. I mean... just look at those numbers.
     
  10. strwrsfrk macrumors regular

    Joined:
    Mar 1, 2011
    Location:
    Arlington, VA, USA
    #10
    Doesn't Apple write its own drivers for GPUs? I could be wrong, but if I'm not, then you can't blame Intel (or Nvidia or AMD) for poor driver performance on Mac OS X.

    And here is the crux of my argument: The definition of "Capable." Yes, you won't be playing modern AAA games at full resolution with maxed settings on your MacBook Pros with an integrated graphics solution. But why would the power of a GPU considered good for gaming in 2009 suddenly be considered "not capable" in an integrated solution 3 1/2 years later? That's just bizarre to me. Again, especially since you're considering the reduction in the form factor (15" to 13") and other efficiency considerations. It's like everyone just knee-jerk says "Intel graphics suck" as if there's a vacuum; consider the X3100 or GMA950 IGPs - both of which can handle basic computer tasks at 1080p - and how far Intel has come since then.

    Anyway, the numbers posted above aren't all that terrible for gaming. Many AAA titles from 2010 on might cause some problems, but many popular games (Source engine, Torchlight, a slew of indie titles, etc.) are perfectly smooth with an HD4000 and can only get better with the HD4600. It's still an integrated solution which cannot compete with a $120+ dedicated GPU, sure, but it's a darn reasonable, "capable" GPU for just about any consumer task.
     
  11. bill-p, Jan 18, 2013
    Last edited: Jan 18, 2013

    bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #11
    No. Manufacturers are requested by Apple to write drivers for OS X. So the blame clearly falls onto Intel, nVidia or ATI.

    Again, I'm not saying it sucks.

    But I'm saying that it's absurd to say HD 4000 is equal to this (9600M GT) and to that (330M GT) GPU when performance numbers clearly say it isn't.

    And then the same thing holds true for HD 4600. It's absurd to compare it to 650M when clearly they aren't equal.

    ----

    Edit: but on the topic of "not capable", I think the more valid comparison is to compare HD 4000 against GeForce 320M.

    And here are the numbers:

    Star Craft 2:
    HD 4000 - Low Avg 115fps - Medium Avg 25fps - High Avg 16fps
    320M - Low Avg 100fps - Medium Avg 26fps - High Avg 15fps

    Mafia 2:
    HD 4000 - Low Avg 32fps - Medium Avg 26fps - High Avg 21fps
    320M - Low Avg 28.4fps - Medium Avg 23.4fps - High Avg 19.3fps

    Dirt 2:
    HD 4000 - Low Avg 33fps - Medium Avg 26fps
    320M - Low Avg 40fps - Medium Avg 28fps

    World of Warcraft:
    HD 4000 - Low Avg 89fps - Medium Avg 74fps
    320M - Low Avg 132fps - Medium Avg 104fps

    Call of Duty 4 Modern Warfare:
    HD 4000 - Low Avg 100.6fps - Medium Avg 43.3fps
    320M - Low Avg 112fps - Medium Avg 41fps

    Source:
    http://www.notebookcheck.net/NVIDIA-GeForce-320M.28701.0.html

    So as it turns out, Intel is barely delivering an integrated solution that's "slightly" faster than what nVidia delivered for Apple... 2 years ago.

    That means integrated performance in Apple's computers has pretty much stayed about the same for 2 years. If you're saying HD 4000 is "capable", then you'll have to say the same for GeForce 320M as well. And then it turns out we have always had integrated GPUs that are capable of driving these high-resolution displays all along. If only Intel would allow third-parties to make chipsets and integrated graphics for their CPUs.
     
  12. strwrsfrk macrumors regular

    Joined:
    Mar 1, 2011
    Location:
    Arlington, VA, USA
    #12
    Sure. Correcting misconceptions regarding performance is always a good idea. Accurate information is key to a quality discussion.

    The "capable" discussion can get tricky. But I would say that the 320m is a capable integrated graphics solution. I have no problem with it. My problem is two-fold:

    1) Holding an integrated solution to unrealistically high expectations - in particular, high-end gaming - and claiming that they are not useful or good enough for almost every other task is blatantly inaccurate. Maybe that's not what you were saying, but a lot of people have that knee-jerk reaction, and it's baffling. The GPU is pushing those pixels pretty well.

    2) People seem to disregard every aspect of progress with regards to the GPU other than theoretical framerate performance. This approach does a disservice to some of the other very exciting developments; I think it's awesome that the graphics horsepower of a 15" MacBook Pro from 2009 is the minimum amount of power we can expect in the 13" MacBook Pro (or, hopefully, even on the 11" or 13" MacBook Air) and that the thermals are so vastly improved. The package is so much smaller, which allows thinner laptops with better battery life, doing the same heavy-lifting that could be expected of a much larger, heavier, hotter laptop from a few years ago. And that's freaking awesome, and that's progress.
     
  13. bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #13
    But that's exactly what Intel is trying to achieve by comparing the HD 4600 (GT3) to the GeForce GT 650M. They started it. Nobody was setting any expectations for the HD 4000 until people saw Intel compare the HD 4600, and then they started to draw conclusions about the HD 4000. But gosh, at the end of the day, it's just an integrated GPU. It's not equal to the GeForce GT 330M. In fact, it's not even equal to the GeForce 9600M GT.

    Except that the same performance and thermal envelope was already done 2 years ago with the GeForce 320M.

    So there has been very minimal (if any at all) progress since 2010.

    Looking at it another way, if nVidia was able to create a successor to the 320M with better manufacturing process (HD 4000 is 22nm, 320M is 40nm), then there's no doubt that we would have had much better integrated graphics performance by now.

    So the ball is in Intel's court.
     
  14. luffytubby thread starter macrumors 6502a

    luffytubby

    Joined:
    Jan 22, 2008
    #14
    But do any of you imagine that GT3 will be in the MacBook Air 11" and 13"? That would basically mean they would be strong enough to run Crysis, wouldn't it?
     
  15. bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #15
    It will be, but not at the performance level Intel is showing. They'll likely scale it back to reduce power consumption on ULV chips, which the MBA 11" and 13" may get.

    On the other hand, if you're asking about Crysis, then sure, it can run Crysis. But pretty much anything can run Crysis nowadays. Even the almost-3-year-old GeForce 320M that was in the 2010 MacBook Air could play Crysis.
     
  16. strwrsfrk macrumors regular

    Joined:
    Mar 1, 2011
    Location:
    Arlington, VA, USA
    #16
    You make some good points, but I think on others you've missed the boat. Yes, Intel invites these comparisons. But it's a tech demo, and everyone should take it with a grain of salt. Of course the top-tier HD4600 is not equal to a 650m, and no one should make that mistake. The demo serves its marketing purpose inasmuch as it shows a moderately demanding game playing well on both a common higher-end GPU and an integrated solution. We're nerds; we debate every little thing ad nauseam. But for most people, the point is that the HD4600 can play with the bigger kids, even if just a bit and in a controlled test.

    And the TDP of the 320M was 20W, which is 3W higher than the top-end i7 currently in the MBA (throw in the 10W of the base 1.4GHz Core 2 Duo and you're at 30W for the whole package). If they're cramming the same capabilities of a 20W GPU from 2010 into a sub-20W package that includes the (substantially better) CPU as well, that is progress about which I can get excited.

    Would it be awesome if Intel integrated a solution by Nvidia instead of its own? Yes. With regards to graphics, I'm a major Nvidia fanboy. But Intel is making steps in the right direction, and their offerings are not all that disgraceful, all things considered. And if they increase their top-tier graphics capabilities by 50% every generation (unlikely, but we can hope), then there will be some crazy powerful CPU/GPU combos in the next few years.
     
  17. bill-p, Jan 18, 2013
    Last edited: Jan 18, 2013

    bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #17
    Uh... the 320M is not 20W. It's integrated into the chipset, and the whole 320M chipset consumes about the same amount of power as the good ol' GeForce 9400M chipset, which was about 12W.

    So the whole package is actually just 22W when you couple it with the Core 2 Duo SU9400 at 10W.

    And the whole package of Intel HD 4000 is actually not sub-20W because you still need to take into account the chipset. Intel has not built the chipset into the CPU yet. The external 7-series chipset for the Ivy Bridge series of processors has a TDP of approximately 4W, so in the end, you're still looking at roughly 21W for the whole package.
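
    A rough tally of the two packages being compared (using only the nominal TDP figures above; these are spec-sheet numbers, not measurements):

        # 2010 setup: Core 2 Duo SU9400 + GeForce 320M chipset
        c2d_su9400_w = 10
        gf320m_chipset_w = 12      # roughly the 9400M-class chipset figure
        print("2010 package:", c2d_su9400_w + gf320m_chipset_w, "W")   # 22 W

        # 2012 setup: Ivy Bridge ULV (CPU + HD 4000 in one TDP) + external PCH
        ivy_ulv_w = 17
        series7_pch_w = 4          # approximate 7-series chipset TDP
        print("2012 package:", ivy_ulv_w + series7_pch_w, "W")         # 21 W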

    As such, it's not that much better in terms of power consumption. In fact, it may be worse when Turbo Boost kicks in. CPU performance is better, though, I'll give you that.

    They're improving, sure, but I don't think they are taking any step in the right direction.

    For one, they don't have to pull this "stunt" (GT3 vs GT 650M) just to make HD 4600 look better than it actually is.

    And for another, they don't have to "invent" a new metric (SDP) to make it look like they significantly reduced power consumption in Haswell, when in reality SDP just measures power consumption indirectly, via thermals, under a lighter load on the processor. TDP has typically been measured under a heavier load, so TDP numbers are always going to look higher.

    Honestly, I'm not a fan of anything but good tech. And Intel hasn't shown good tech for the past 2 years. They aren't doing any better now. Smokescreens and false comparisons are usually bad signs.
     
  18. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #18
    Which is a far cry from a 50-100% difference.
    Checking all the benchmarks they have, the two GPUs can be considered pretty much on par. In raw numbers the HD 4000 is actually quite a bit faster: the 9600M is a 120 GFLOPS processor, while the Intel can push 250 GFLOPS at peak. It may have ****** drivers, no optimizations from the game engine people, and quite a different architecture, but it is far from what some here make it out to be. The Xbox 360 has a worse GPU, and that is used as a gaming console by many who do not complain about terrible performance or graphics the way people here do.

    I was too practical there. In my experience, especially newer titles only run at low settings, if at all, on a 330M. There is pretty much no difference in actual use between the 330M and the 4000. By default Apple also ran it at 500 MHz, which is below the standard clock.
    Intel optimized the GPU for low settings, where it usually performs adequately. It falls off harder at medium or high, but at least you can run games somewhat decently at low. The 330M (which I have) doesn't do so well there. Some newer games are just pathetic no matter the setting.
    You are right, they aren't equal. Add the 4 extra EUs and some optimizations, and I think the HD 4600 might get close. The GT3e will definitely deliver more.

    In Windows Intel sucks. In OSX Apple delivers the OpenGL drivers, and as I recall they put out 30% better numbers than in Windows, which actually made the difference in the 320M vs. HD 3000 comparison. The HD 3000 could keep up with the 320M at low settings and did much better than in the Windows benchmarks. The Nvidia OpenGL drivers seemed quite poor too.


    The HD 4600 is, I think with pretty much 90% certainty, not the GT3e. That one will probably be called HD 5000 or something else entirely.

    When looking at the whole 320M/SU9400 debate one should keep memory bandwidth in mind. The HD 4000 already loves faster RAM. The current-generation AMD Trinity chips are quite poor without fast DDR3 memory. Those SU9400s didn't need as much bandwidth on the CPU side as what we have today. That leaves any IGP starved, as DDR3 speeds didn't really grow all that much.
    With that accounted for, I think the performance they squeeze out of those IGPs is quite good.
    Nvidia couldn't do too much better without fixing that same problem.

    On the other hand, that makes for a big power efficiency benefit. A 128-bit memory interface on a dedicated GPU needs quite some power, especially with GDDR5.
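
    To put the bandwidth gap in numbers (a sketch; the GDDR5 transfer rate below is just an illustrative figure, not any particular card's spec):

        def bandwidth_gb_s(transfers_mt_s, bus_width_bits):
            # peak theoretical bandwidth = transfers per second * bytes per transfer
            return transfers_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

        # IGP: dual-channel DDR3-1600, 128 bits total, shared with the CPU
        print(bandwidth_gb_s(1600, 128))   # 25.6 GB/s

        # mainstream dGPU: 128-bit GDDR5 at ~4000 MT/s effective (assumed)
        print(bandwidth_gb_s(4000, 128))   # 64.0 GB/s, all of it for the GPU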

    SDP simply spells out as "scenario design power", which they actually had for the last Ivy Bridge ULVs too. The only confusing thing was how the rumor-mill press took it in, and Intel's marketing not being particularly brilliant at explaining it. Ivy Bridge also allowed an SDP of 13W and something higher. This SDP is nothing else: no new invention, only a new name, because apparently no manufacturer actually used it. They simply promoted a feature that was already available as cTDP on the other Ivy Bridge chips.
    Intel has virtually destroyed the competition. The CPUs are incredible, with good performance that is actually of use, some really good features that unfortunately aren't used much (Quick Sync), and GPUs that are power efficient and more than good enough for most things. Gaming class was never really the game plan; it has always been "it would be nice if that worked too". In the end they have about 4 different dies that they put into all kinds of TDP bins and sell. For 90% of users gaming is not the primary concern and everything else works quite well: slim, low-noise notebooks with more than enough capability for multimedia and office work.

    If you compare the GPUs from when the X3100 was around and today, there is a huge difference. And keep in mind they always only competed in the 10-15W TDP class, not 30W+. The truth is low-end dGPUs simply aren't worth it anymore. Either one goes mainstream, a 640M at least, or forgets about it today. You don't see a 520M or 7470 in any notebook anymore because they are just not worth it: wasting 15-20W and the space for chips and cooling when it barely gets you anything. At that level dGPUs just aren't worth it anymore. IGPs save on the memory system, weight, and space.
     
  19. bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #19
    Look at the numbers for WoW, Dirt 2, and StarCraft 2. Those are 50-100% different.

    Especially where WoW is concerned: that's a game where the CPU gets bogged down, and most likely the HD 4000 benchmarks were run where the CPU was prevented from using Turbo Boost. Since the HD 4000 requires Turbo to get past its nominal operating frequencies, I'm not surprised to see WoW performing badly on Intel graphics.

    Quoted GFLOPS is never the right way to argue the performance of a system. If you want to quote GFLOPS, the PS3 would be way ahead of the 360. And yet it doesn't even come close.

    And the 360 actually has quite a lot of memory bandwidth due to having 10MB of eDRAM embedded onto the GPU. That essentially gives it "free" AA, "free" blur/depth of field filters, and a lot of other things. That's why the 360 gets the upper hand.

    Even if HD 4000 can get 1000 GFLOPS, it'd still be limited by memory bandwidth since it has to leech off of system RAM. Meanwhile, the GT 330M enjoys more RAM bandwidth with GDDR3.

    I'm not sure what titles you are running with your GT 330M. But feel free to post frame rate or benchmark scores, and I'll post back my scores from using the HD 4000 to compare.

    I'm confident the 330M GT will be consistently faster because that's what I have seen. Yes, it holds true even for newer games. If the HD 4000 matches the 330M at anything, that's when the CPU becomes the limiting factor and not the GPU.

    I run OpenGL applications very often under both Windows and OSX. Specifically, I run Dolphin (Wii/GameCube emulator that's cross-platform), and Windows is always faster than OSX at OpenGL. No exception.

    Same thing for AutoCAD and Maya.

    Uhh... no. SDP wasn't there for "last" Ivy Bridge.

    If you mean Ivy Bridge Y, those aren't on the market yet:
    http://ark.intel.com/products/72013/Intel-Core-i5-3339Y-Processor-3M-Cache-up-to-2_00-GHz

    Here's the actual "last" Ivy Bridge:
    http://ark.intel.com/products/65707/Intel-Core-i5-3317U-Processor-3M-Cache-up-to-2_60-GHz

    Intel essentially reduced the clock speeds of their best-binned Core i5 and Core i7 ULVs, measured that they push out 7W of heat at idle, and called it "SDP", since they couldn't call it "TDP" anymore.

    They literally just "invented" the term in order to make it look like they "innovated" or "improved" thermal profile significantly. But in reality, it's just some better binned chips that run at lower frequencies.

    You don't see those because those are now older generation chips.

    In fact, some 600M GPUs will be dropped because they are based on older tech (Fermi). That being the case, what are the chances you'd see a 500M GPU anymore?

    Same goes for 7470M, which is just a renamed 6490M.
     
  20. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #20
    Search for cTDP up or cTDP down. Manufacturers could set the TDP of the 17W U-series CPUs to 13W and also up them to 20-something watts. That is exactly the same thing as what they call SDP now. The only difference is that they gave it a new name rather than custom TDP (up/down).
    It does the same thing. They lower the TDP, which only works via clocks. It is still the same chip with the same name for the same money.

    BTW, all chips are binned; there are only 4 different actual dies that leave the plants. Everything else is binning and locking multipliers.

    Sure, GFLOPS does not tell you much about actual performance, but it does show how much difference there is compared with older chips. These aren't the little afterthought graphics, vastly subpar, that they used to be. It is not like comparing a processing monster to a poor attempt at a bit of GPU performance. These IGPs push quite some performance.

    As for my current games: I tried Civ V in OSX, which doesn't work at any setting. I don't game much anymore because I gave up on anything newer like Bad Company, which isn't really playable even at low settings. MW3 works without AA, no smoke edges, high textures. I tried some newer Total War and gave up on it.
    Otherwise my gaming is Re-Volt (which is 16-bit), CSS and C&C Generals. Newer games just drive me crazy. I would like to play ARMA 2 or Assassin's Creed and neither would even yield the bare minimum to be playable. It is effectively an IGP by modern standards. All the games I can play work well on the HD 4000 too, as far as I know, and on the rest you can forget either one.
    I don't have a single game in OSX now, and aside from my MW3 settings in Windows I cannot remember anything.
    That is: no soft edges, no AA, high textures, native 1680x1050, no spec map, basically everything turned off so that I get native res, which I prefer over higher details and AA. Yields me 30-40 fps, indoors sometimes above 40, some scenes below 20.
    C&C Generals runs at all high, CSS too. Total War, the imperial one (pre-Napoleon), had such poor texture performance it was full of artifacts. Even at low it was annoying. Water battles didn't really work. Eventually I just stopped playing it.
    IMO a GPU at this performance level is worth no more than the HD 4000.
    A 620M or 710M is where it starts to make a difference, IMO.
    I forgot to mention I usually run my 330M overclocked to 600 MHz rather than the usual 500. It makes the difference in multiplayer, where frames usually drop.
    Effectively I run all games at low/medium settings and forget new demanding games altogether. StarCraft 2 might be the exception, as there 25 fps should be good enough and the 330M does a lot better at medium/high.

    My initial post was uneducated, I grant you that. I extrapolated from some older memories and performance factors. I compared the HD 3000 to the 4000 and guessed it would come out around where the 330M is, given how that one fared against the older and slower competition. It is still not a GPU of any use for actual gaming.
    A GT3e could match it with 40 EUs (probably some more efficiency per EU) if they get enough memory-access relief from the embedded memory.
    I probably won't worry about dedicated GPUs anymore after this current notebook. It doesn't seem worth the weight and size. With Broadwell at 14nm, more stacked memory, and a quad core plus a strong GPU all on a single SoC, I see even mainstream-class GPUs losing their reason to exist.
    Yes, you are right about OpenGL. OSX is terrible.
    http://www.phoronix.com/scan.php?page=article&item=intel_sandy_threesome&num=1
    I remember some Steam game that Anand tested in OpenGL on both platforms where especially the HD 3000 drivers on Windows seemed to be poor. Maybe they improved a lot, or it was bugs. That Phoronix article is quite new.
    And why are they only rebrands? Because Nvidia and AMD both know they don't sell anymore. People either want more GPU capability or a thin notebook, and the market in between isn't really served. These entry-level cards simply don't matter anymore; they aren't worth the trouble for most manufacturers. There is a handful of notebooks with a 620M out there, and that one is faster than my 330M. Intel and AMD killed this low-performance market. Nvidia and AMD simply gave up on it. If they finally sell up-to-date entry hardware 18 months down the road it won't change the picture.

    Today a dGPU needs a certain performance level to make it worth it, and that usually requires at least some 60% of the CPU it is paired with.
     
  21. bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #21
    cTDP was a media spin on what should just be "TDP".

    Essentially, starting with Sandy Bridge, which has Turbo Boost, TDP is no longer the right metric for power consumption or thermals, because a processor may kick in Turbo and increase its frequency until its internal thermal sensors detect a high value. So your 17W TDP is only applicable at the max non-Turbo frequency. For instance, on the MacBook Air, that's 1.7GHz for the CPU and 1.1GHz for the GPU.

    And yeah, so the Intel HD 4000 can perform better or worse depending on which CPU it's coupled with. If it's with a CPU that has more cooling headroom, it'll run faster, and vice versa.

    In most MacBooks, it doesn't get that much headroom compared to some Windows laptops... This is especially the case with the MacBook Air. So the HD 4000 may actually run much faster on a Windows laptop than on a MacBook. Here's a chart to demonstrate that:

    [charts: HD 4000 gaming performance in a MacBook vs. a Windows laptop]

    I meant "binning" as in these chips are "specifically cherry-picked" to reduce frequencies and run at slower clocks.

    Like I stated above, it really depends on many factors. The problem is this: not all HD 4000 are made equal.

    And when it comes down to it, aside from the chips you get in quad-core MacBooks (15" and above... only 15" for 2012), most of the other ones run at slower clock speeds and are under more constraints (thermal, Turbo clocks, etc...), so it's absurd to compare them to 330M GT. Hell, even the ones inside quad-core MacBooks are not guaranteed to beat 330M GT.

    I don't have Civ V, but I have Modern Warfare 3. With the HD 4000 inside my MacBook Air 13", I'm limited to 1280 x 800 with all settings at low except textures. Framerate was between 30 to 40fps.

    With Radeon HD 6490M, I could go 1440 x 900 with all settings maxed and no AA. Framerate was between 40 to 50fps.

    With Radeon HD 6630M (Mac Mini), I could go to 1920 x 1080 without AA and soft edge smoke.

    Also don't have C&C but I have StarCraft 2.

    Medium settings have HD 4000 chug along at 20-30fps.

    Radeon HD 6490M gets 40-60fps at 1440 x 900.

    Radeon HD 6630M gets 50-60fps at 1600 x 900.

    And just for kicks, I compared those with Crysis 2 as well:

    HD 4000 1024 x 600 at lowest settings - 20-25fps

    Radeon HD 6490M at lowest settings 1440 x 900 - 25-40fps

    Radeon HD 6630M at lowest settings 1600 x 900 - 25-40fps

    Colleague's 330M GT (Alienware M11x) at lowest settings 1366 x 768 - 32-50fps

    And needless to say, my rMBP eats all of those for breakfast. Crysis 2 alone gets 35-60fps at 1920 x 1200.

    If Apple exposed HD 4000 on the rMBP to Windows, I'd have gotten some results for you. But sadly, it's dGPU only for Windows.

    I think your MacBook or whatever system you are running may be limited by some other factor. The 330M GT from what I can remember isn't that bad. I mean... look squarely at the Crysis 2 results and tell me if the 330M GT is "equal" to HD 4000 or not. And I don't think Crysis 2 would lose to any modern game in terms of requirements.

    OpenGL has been worse on OSX for a long while. Even now, not all OpenGL 3.0 features are available, and we already have OpenGL 4.0.

    3D performance suffers quite noticeably under OSX as a result. I always have to resort to Windows for gaming or for AutoCAD/Maya even though I much prefer to work under OSX.

    I don't know how the HD 3000 does, since I don't have any computer that I can test that with (the 2011 MBP 15" above doesn't expose HD 3000 under Windows either), but HD 4000 does get better OpenGL performance under Windows compared to OSX.

    520M isn't a rebrand. It's just a very old GPU.

    And now the 620M is old as well. nVidia is moving on to Kepler, so all Fermi chips as of this point are old news.

    If you're looking for a low-power GPU, try GeForce GT 640M LE.
    http://www.notebookcheck.net/NVIDIA-GeForce-GT-640M-LE.72199.0.html

    It has the same TDP as the 520M, but it has much better performance.

    As for ATI's rebranding strategy, I have no idea. It may just be that they don't have a new GPU tech to bank on. That would also explain why Apple went nVidia this generation.
     
  22. cirus macrumors 6502a

    Joined:
    Mar 15, 2011
    #22
    What are you talking about?

    The hd 4000 is easily in the same ballpark as the 330m.

    http://www.anandtech.com/show/6063/macbook-air-13inch-mid-2012-review/6
    http://www.anandtech.com/show/6409/13inch-retina-macbook-pro-review/10

    Look at the retina 13 inch MacBook Pro: it's easily on par with the 2010 15 inch MacBook Pro (using the 330m).

    [benchmark charts from the AnandTech reviews linked above]

    Anandtech uses better testing methodology too.

    Look at the progress: a MacBook Air using a ULV 17 watt chip is on par with a 330m paired with an Arrandale CPU (much higher TDP).

    These are only two games, but the 330m is NOT consistently much faster than the hd 4000.

    No, just no.

    "Going by benchmarks alone, the 9600M GT is still consistently 50-100% faster than Intel HD 4000... at everything."
     
  23. bill-p, Jan 19, 2013
    Last edited: Jan 19, 2013

    bill-p macrumors 65816

    Joined:
    Jul 23, 2011
    #23
    Uh... no. You're cherry-picking results.

    [chart: StarCraft 2 benchmark, medium settings, 800p]

    Both the GT 330M and Radeon HD 6490M suffer from lack of VRAM since both have only 256MB of VRAM. And if you have played StarCraft 2, you'd know that High requires 512MB of VRAM, and Ultra requires 1024MB (1GB) of VRAM. So neither GPU has enough VRAM for those scenarios.

    The HD 4000 can have up to 512MB of VRAM given a sufficient amount of RAM (8GB installed), so it's not VRAM-bound in those situations.

    Half Life 2 is also a bad game to show GPU performance because it's both CPU-bound and VRAM-bound. That's why the HD 4000 again catches up. It's coupled with a faster CPU than what's inside the MBP 2010, plus it has more VRAM.

    Heck, the rMBP 13" has 768MB VRAM. That's 3 times what the GT 330M and Radeon HD 6490M have. In any situation where VRAM is a limiting factor, the HD 4000 will always have more wiggle room. But that doesn't mean it has more raw computational power.
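
    A quick check of those VRAM claims (a sketch that just takes the 512MB/1GB requirements and the VRAM sizes stated above at face value):

        # claimed StarCraft 2 VRAM requirements per preset (figures from above)
        requirements_mb = {"High": 512, "Ultra": 1024}

        # VRAM available to each GPU, as quoted above
        vram_mb = {
            "GT 330M": 256,
            "Radeon HD 6490M": 256,
            "HD 4000 (8GB system RAM)": 512,
            "HD 4000 (rMBP 13 inch)": 768,
        }

        for gpu, mb in vram_mb.items():
            ok = [preset for preset, need in requirements_mb.items() if mb >= need]
            print(gpu, "->", ok if ok else "neither preset")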

    Not to state anything, but Anandtech hasn't been a good source of information for a while now. Their testing methodology has been questioned multiple times, and this is just one of those examples.

    Edit: But here's my assessment just to avoid confusion:

    I agree you can say that in general, the rMBP 13" as a whole system is on par with the 2010 MBP 15" both in graphics and general processing performance.

    But that's a whole machine. Like I showed, not all HD 4000 is made equal. You will still find that other HD 4000 in other machines lag behind GT 330M. And even then, the 2010 MBP 15" is a bad example because the GT 330M is underclocked in that machine + it has the absolute lowest amount of RAM that the GT 330M can take.

     
  24. luffytubby thread starter macrumors 6502a

    luffytubby

    Joined:
    Jan 22, 2008
    #24
    Good discussion guys.


    After all, I also think that tests get skewed for certain games, depending on how CPU intensive they are.



    Guild Wars 2 is a recent example: on both OSX and Windows, this game eats CPU power like crazy. Since next-generation CPUs tend to gain 5-15% in performance just from their new architecture, this also inflates and skews the results.
     
  25. cirus macrumors 6502a

    Joined:
    Mar 15, 2011
    #25
    Really? I'm pointing out that the 9600M GT (and by extension the 330m) is not consistently 50-100% better than the hd 4000. I'm not cherry-picking results; there are only a few benchmarks there and I've listed all of the relevant ones (AnandTech only has StarCraft and Half Life 2 benches). I even made a note that I was only looking at two games. If there were more, I would have used them.

    I believe that in most cases, when a gpu runs out of memory it uses system RAM instead.

    Looking at your StarCraft 2 results you are still proving my point: the 330m is not in a league of its own. It's exactly 25% faster than the hd 4000, indicating somewhat comparable performance. (Not to mention that is a ULV hd 4000, and as you said yourself, the hd 4000 in other configurations can be much faster, so the hd 4000 in a standard Ivy Bridge i5 might be directly comparable to the 330m.)

    You say yourself "Even if HD 4000 can get 1000 GFLOPS, it'd still be limited by memory bandwidth since it has to leech off of system RAM. Meanwhile, the GT 330M enjoys more RAM bandwidth with GDDR3."

    Okay, so the hd 4000 has more VRAM, but it's also much slower VRAM.

    The question ultimately comes down to "which is more playable?" when looking at graphical performance. I used high-resolution, high-setting comparisons because Intel also tanks when settings and resolution are turned up.

    Look at my slide of 900p high StarCraft and your slide of medium 800p StarCraft. Look at the difference between the 6490 with 256 MB of VRAM and the 6750 with 512 MB of VRAM. The delta between the two is much smaller at the high 900p settings than at the medium 800p settings, indicating that VRAM is not a problem (i.e. 256 MB of VRAM is not limiting the 6490 at 900p high; the difference between the two is what one would expect).

    At least AnandTech does their benches in the same area for their comparisons. I have no idea whether the StarCraft benchmarks on notebookcheck are at the beginning of a scene or in the middle of a heavy battle, because a lot of them are user-submitted. Stare at a wall: 60 fps, great! Turn around: 30 fps, wtf? When notebookcheck reviews a game themselves, the benchmarks should be comparable, but when the results are user-submitted, I'd take them with a grain of salt.
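
    In other words (purely hypothetical fps numbers here, just to illustrate the delta logic, since the actual charts aren't reproduced in this thread):

        # hypothetical averages, only to show how the VRAM argument works
        fps_800p_medium = {"6490M (256MB)": 40, "6750M (512MB)": 60}
        fps_900p_high   = {"6490M (256MB)": 25, "6750M (512MB)": 35}

        def gap(table):
            # relative gap between the slower and faster card in one test
            low, high = sorted(table.values())
            return (high - low) / low

        # if 256MB were the bottleneck at 900p high, this gap would grow, not shrink
        print(f"800p medium gap: {gap(fps_800p_medium):.0%}")   # 50%
        print(f"900p high gap:   {gap(fps_900p_high):.0%}")     # 40%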

    ----------

    I'm pretty sure that the 330m used about 20-25 watts. Arrandale standard voltage has a TDP of 35 watts.

    Considering that a ULV Ivy Bridge i5 gets better performance than a standard-voltage Arrandale, and the IGP is about 70-80% of the 330m, it's nothing short of amazing how far power efficiency has come to fit that level of performance into about 20 watts total.
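
    Rough perf-per-watt math with the figures from this thread (the 70-80% estimate and the TDP numbers are the ones quoted above, so treat it as ballpark only):

        # 2010-style platform: standard-voltage Arrandale + GT 330M
        old_watts = 35 + 22        # CPU TDP + ~22W midpoint for the 330M
        old_gpu_perf = 1.0         # normalize the 330M to 1.0

        # ULV Ivy Bridge: CPU + HD 4000 in one 17W package, plus ~4W PCH
        new_watts = 17 + 4
        new_gpu_perf = 0.75        # ~70-80% of the 330M, per the estimate above

        print(round(old_gpu_perf / old_watts, 3))   # ~0.018 perf per watt
        print(round(new_gpu_perf / new_watts, 3))   # ~0.036, roughly double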
     
