Morning guys, looks like AnandTech posted an update on the M370X a few hours ago:

Article Link

For fun, I threw 3DMark 11 on the 2014 MBP (750M) and it scored:

Overall: P2671
Graphics: 2413
Physics: 9010
Combined: 2134

To compare, here is what we saw out of the M370X:

Overall: P4040
Graphics: 3682
Physics: 8988
Combined: 3690

I did some math and came up with the following statistics:

3DMark 11 Graphics: 52.5% Improvement
3DMark 11 Overall: 51.25% Improvement
3DMark 11 Combined: 72.9% Improvement
3DMark Ice Storm (Graphics): 56% Improvement
3DMark Cloud Gate (Graphics): 20.28% Improvement
3DMark Sky Diver (Graphics): 28.21% Improvement
3DMark Fire Strike (Graphics): 23.71% Improvement

So that provides some of the picture. The Fire Strike results almost make me wonder if the driver just needs some development time, while the 3DMark 11 tests suggest a larger performance margin. Perhaps the driver isn't very efficient with tessellation?
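If anyone wants to double-check that math, here's roughly how the percentages fall out of the raw scores (just a quick Python sketch; the numbers are the scores quoted above):

```python
# Quick sanity check of the improvement percentages above.
scores_750m = {"Overall": 2671, "Graphics": 2413, "Physics": 9010, "Combined": 2134}
scores_370x = {"Overall": 4040, "Graphics": 3682, "Physics": 8988, "Combined": 3690}

for test, old in scores_750m.items():
    new = scores_370x[test]
    improvement = (new / old - 1) * 100  # percent gain of the 370X over the 750M
    print(f"3DMark 11 {test}: {improvement:+.1f}%")

# Reproduces the ~51%, ~52-53%, and ~73% figures above (Physics is basically a wash).
```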
 
In all fairness, Maxwell is an updated Kepler architecture. It just has bigger internal caches and lacks DP engines; that's why it's so efficient.

Also, I believe Apple went with the GCN architecture because it's more mature and in fact more efficient than any Nvidia architecture (the Green500 list says GCN is the most power-efficient GPU architecture that has ever been on this planet, and look how much performance Apple got out of 129W on the Tahiti chip in the Mac Pro), and it will last longer compared to any Nvidia architecture.

Look how fast the Kepler architecture gets old and outdated in games (Witcher 3, Project Cars). GCN gains more and more performance over time, which is not and will never be the case for any Nvidia architecture (the way it's meant to be milked...).

Also, OpenCL performance is the biggest factor and the most important one for Apple. Not to mention Mantle/Vulkan, which I believe will soon be the flagship API for OS X.

If that's the case, then Ivy Bridge, Haswell, and now Broadwell are all Sandy Bridge chips too, as they're basically evolutionary changes on each other. Really, the last huge Intel architectural change was the C2D generation that got Apple to jump into bed with Intel.

Ok, how about this? If this chip is the superior OpenCL choice over Nvidia (and the main reason Apple went from Nvidia to AMD), then why did Apple only decide to put it in the rMBP now?!!! For Pete's sake, they could've slapped this Methuselah chip into the 1st-gen rMBP. It's almost like saying, 'Oops guys! We've been using the wrong GPU all this time, so we're going to do a U-turn and put a 3-year-old chip in today!'

I'm still quite flummoxed why Apple is doing this to their highest-priced laptop...
 
Hi Mike,

Would you be able to run Xbench (http://www.xbench.com) and post a screenshot? I know it's an old benchmark, but I'm just really curious.

Thanks again for posting all those benchmarks. It really helps some of us decide whether we want to upgrade or not.

 
3DMark 11 Graphics: 52.5% Improvement
3DMark 11 Overall: 51.25% Improvement
3DMark 11 Combined: 72.9% Improvement
3DMark Ice Storm (Graphics): 56% Improvement
3DMark Cloud Gate (Graphics): 20.28% Improvement
3DMark Sky Diver (Graphics): 28.21% Improvement
3DMark Fire Strike (Graphics): 23.71% Improvement
Only Fire Strike is really indicative of performance. Cloud Gate is aimed at mobile SoCs in the 2-4W region, Sky Diver is a middle-of-the-road test, but Fire Strike represents current gaming engines.
The other benchmarks basically run a graphics load that reaches insanely high fps, which tests the CPU more than the GPU, or differences in execution-unit distribution between chips. A twice-as-high Fire Strike score, on the other hand, actually translates to about twice the fps in a game made in the last three years, and a 20% lead to about 20% more actual performance.
You can always find benchmarks that show greater differences, but one should always keep in mind WHAT a benchmark measures to know what it says. Out of those, only Fire Strike measures PC gaming performance; Cloud Gate measures Android/iOS gaming performance, and Sky Diver measures the limits of gaming on mobile (Android/iOS) or very old PC titles.
 

Thank you, and thanks for the link :) Now go play some WoW at Retina res and let us know the avg. fps ;)
 
Only Fire Strike is really indicative of performance. Cloud Gate is aimed at mobile SoCs in the 2-4W region, Sky Diver is a middle-of-the-road test, but Fire Strike represents current gaming engines.
The other benchmarks basically run a graphics load that reaches insanely high fps, which tests the CPU more than the GPU, or differences in execution-unit distribution between chips. A twice-as-high Fire Strike score, on the other hand, actually translates to about twice the fps in a game made in the last three years, and a 20% lead to about 20% more actual performance.
You can always find benchmarks that show greater differences, but one should always keep in mind WHAT a benchmark measures to know what it says. Out of those, only Fire Strike measures PC gaming performance; Cloud Gate measures Android/iOS gaming performance, and Sky Diver measures the limits of gaming on mobile (Android/iOS) or very old PC titles.

I do believe the graphics card comes into play on each of those tests. Remember that the 750M reference system I'm using has the 4980HQ 2.8 GHz CPU in it, and the 370X has a 4870HQ (about 8-10 percent slower). In your scenario, the tests that were CPU-bound would always favor the 750M system due to its faster CPU. In fact, the 370X smoked it every time. I do totally agree with your Fire Strike comment though; it's my go-to for benchmarks in general. However, 3DMark 11 is pretty relevant as well, and it strongly favored the 370X by margins over 50 percent. I was expecting that to translate to a score closer to 3000 in Fire Strike Graphics.

----------

Thank you, and thanks for the link :) Now go play some WoW at Retina res and let us know the avg. fps ;)

LOL, I know this one's important to you. I've had an on-and-off love affair with WoW since the 2004 launch, so I feel you. My wife plays as well. I'll pull some frame rates at Retina and at some additional settings to give you an idea of what I'm seeing. Give me about an hour.
 
Could you link me the benchmark you base that on? Because all the results I have seen show that the 980M obliterates the AMD cards in compute while drawing less power. Unless you are talking about FP64; it is true that Nvidia cards were not really made for high-performance double-precision computations.
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-12.html

One more thing: I suggest running the same real-world OpenCL app on both a GCN card and a Maxwell card.

You will see that the GCN cards are faster. How much? That depends on the app.
 
I really think this is the answer: when they launch the Broadwell MBP with the NV960 in October, they can market it.

I doubt we will see Nvidia within the next couple of years.

----------

LOL, I know this one's important to you. I've had an on-and-off love affair with WoW since the 2004 launch, so I feel you. My wife plays as well. I'll pull some frame rates at Retina and at some additional settings to give you an idea of what I'm seeing. Give me about an hour.

I have had this same love affair and am awaiting this as well. Are you going to run it in Windows, OS X, or both?
 


There are no AMD versus Nvidia benchmarks at that link. Plus, it is desktop class. The mobile parts are going to have a narrow power band to stay inside of. How well something does when "max power is no object" is not really going to be highly relevant to the internals of a rMBP... unless you want to have another round of product recalls in a couple of years.

I doubt you're going to find max-compute-efficiency-within-a-narrow-power-band benchmarks on these tech-spec porn websites. The container (the rMBP case) the mobile GPU is placed inside of can have an impact on the selection. "Runs cooler most of the time but spikes higher when pressed" may not be as good as "runs more predictably thermally in the same range on all workloads".
 
I doubt we will see Nvidia within the next couple of years.

----------



I have had this same love affair and am awaiting this as well. Are you going to run it in Windows, OS X, or both?

I ran this in OS X only for now. So, in your hut at your base with WarMaster Zog, with all settings at Retina using the "Good" preset slider (which sets everything to Good and 4x sampling), on the 370X I get 52 fps at rest facing Zog at his table. Then I turn around, run outside, go down the ramp, and at the stairs on the left I look out to the center of your garrison, facing the fire from the top of the stairs; there I get 27 fps.

On the 750M, I get 29 fps facing Zog, and at the stairs facing down at the flame I get 20.1 fps. So it's definitely around 30-35% faster in some areas, and maybe even more indoors based on the lighting effects in play.

I would say the 750M is barely playable at Retina, but the new machine definitely is playable.
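Same back-of-the-envelope math on those WoW readings, if anyone wants to check it (a rough Python sketch using the fps numbers above):

```python
# Rough speedup of the M370X over the 750M from the WoW readings above.
spots = {
    "Indoors, facing Zog": (52, 29.0),             # (M370X fps, 750M fps)
    "Garrison stairs, facing the fire": (27, 20.1),
}

for spot, (fps_370x, fps_750m) in spots.items():
    gain = (fps_370x / fps_750m - 1) * 100
    print(f"{spot}: {gain:.0f}% faster")

# Works out to roughly 79% indoors and 34% at the stairs, which lines up with
# "30-35% faster in some areas, and maybe even more indoors".
```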
 
XBench:

M370X
Quartz Graphics Test: 142.38
Line: 165.56
Rectangle: 139.78
Circle: 124.78
Bezier: 155.79
Text: 133.59

Disk Test: 1612.13


750M
Quartz Graphics Test: 101.07
Line: 103.80
Rectangle: 86.11
Circle: 82.40
Bezier: 147.24
Text: 107.56

Disk Test: 932.63
 
I ran this in OS X only for now. So, in your hut at your base with WarMaster Zog, with all settings at Retina using the "Good" preset slider (which sets everything to Good and 4x sampling), on the 370X I get 52 fps at rest facing Zog at his table. Then I turn around, run outside, go down the ramp, and at the stairs on the left I look out to the center of your garrison, facing the fire from the top of the stairs; there I get 27 fps.

On the 750M, I get 29 fps facing Zog, and at the stairs facing down at the flame I get 20.1 fps. So it's definitely around 30-35% faster in some areas, and maybe even more indoors based on the lighting effects in play.

I would say the 750M is barely playable at Retina, but the new machine definitely is playable.

Thanks! Looks like I'll be happy when I get mine this coming Friday. Admittedly, I'll still sell it and get Skylake, but it looks like it will get the job done. It's a pretty big upgrade from where I'm coming from: my 2010 MBP really wasn't playable on low settings anymore, and my '09 MP with a flashed 5770 was only getting 40-50 fps on Good settings at 1920x1080.
 
Awesome. The disk speed improvements are crazy. My base 2012 rMBP gets 470; quite an improvement.

Do you know the results for the OpenGL Test and User Interface Test?

Thanks

 
It's not really about putting in the latest and greatest hardware for gaming... it's about using the best technology you can.

It's a laptop, and Apple tries to strike a balance between battery life, performance, and portability (thinness). I'm not saying they should go for "why not both!"; maybe we are not there yet in terms of technology. Do we have any stats comparing the energy efficiency of the M370X vs the 950M you praise as superior? Maybe this was the most efficient way of improving performance while also keeping the aesthetic factor; after all, they mentioned the increase in battery life and where the new GPU excels.
 
It's a laptop, and Apple tries to strike a balance between battery life, performance, and portability (thinness). I'm not saying they should go for "why not both!"; maybe we are not there yet in terms of technology. Do we have any stats comparing the energy efficiency of the M370X vs the 950M you praise as superior? Maybe this was the most efficient way of improving performance while also keeping the aesthetic factor; after all, they mentioned the increase in battery life and where the new GPU excels.

Yes, we have tons of reviews and analyses of the power efficiency of current Nvidia and AMD architectures; you will find them literally everywhere on the internet. The link koyoot shared earlier is quite interesting, though: it suggests that Nvidia cards are more efficient primarily because they can conserve power much better when it's not needed. And while I take note of deconstruct60's point about predictable thermal behaviour, I would still say that a card that on average draws 25% less power is a winner, even if its power draw might spike under certain conditions.

I would also disagree that battery life matters much where a dGPU is concerned. Activating the dedicated GPU will shorten your battery life no matter how you look at it; the battery life improvements more likely come from a bigger, more efficient battery and more efficient components.
 
There are no AMD versus Nvidia benchmarks at that link. Plus, it is desktop class. The mobile parts are going to have a narrow power band to stay inside of. How well something does when "max power is no object" is not really going to be highly relevant to the internals of a rMBP... unless you want to have another round of product recalls in a couple of years.

I doubt you're going to find max-compute-efficiency-within-a-narrow-power-band benchmarks on these tech-spec porn websites. The container (the rMBP case) the mobile GPU is placed inside of can have an impact on the selection. "Runs cooler most of the time but spikes higher when pressed" may not be as good as "runs more predictably thermally in the same range on all workloads".

[Attached image: 61465.png]


Ok, how about this? The Radeon HD 7750 is a 512-core GCN GPU, and it's faster than GM107, which is the chip in the GTX 850M/860M/950M/960M.

Both chips are used in mobile, but remember the R9 M370X is a 640-core GCN GPU, so it will be even faster than a GTX 750 Ti.

Desktop parts and mobile parts differ mostly in nomenclature. The cores are the same; the difference is a slightly different package that the die comes in and how it connects to the board. This is one of the real-world tests that shows it in OpenCL, even if Xbench, a gaming benchmark, shows otherwise.

P.S. I would love to see how AMD GPUs wipe the floor with Maxwell cards in Final Cut Pro X, or in any real-world app that uses OpenCL. We already had the first proof in Vegas Pro.

Also the most important fact: Nvidia only started supporting OpenCL 1.2 in March of this year, while AMD GCN cards have supported OpenCL 2.0 since December of last year.

That's why Apple went with AMD GPUs in the first place. FreeSync and cheaper cards are the other factors ;)
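If anyone wants to see what OpenCL version their own GPU driver actually reports, something like this should do it (just a sketch; it assumes the third-party pyopencl package is installed, but any OpenCL binding would show the same info):

```python
# List every OpenCL device the driver exposes and the OpenCL version it reports.
# Requires the third-party pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(device.name.strip())
        print(f"  Device OpenCL version: {device.version}")
        print(f"  OpenCL C version:      {device.opencl_c_version}")
```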
 


You really think there's any hope of FreeSync working? I mean, that alone is worth an upgrade in my book, but I can't see it working, sadly.

Also just got this update from UPS; seems my Mac is just touring Asia at the moment...
 

[Attachment: image.jpg]
Scolonator: I have no idea, but IMO it would not be logical for Apple to make a monitor of any type, pair it with AMD GPUs, and not use FreeSync.

Of course, they could block it for any external solution (FreeSync monitors from vendors other than Apple would not work with Apple hardware), but we have to test it and see. So far there is no word on it.

Also, FreeSync, being a software update, can be added to the OS X drivers in the future.
 

True, but I'm thinking more along the lines of getting FreeSync working in Boot Camp; that would make games infinitely more playable!
 
You really think there's any hope of FreeSync working? I mean, that alone is worth an upgrade in my book, but I can't see it working, sadly.

Also just got this update from UPS; seems my Mac is just touring Asia at the moment...

What is the format for the UPS tracking number? I have three numbers that the order status page has given me, but none seem to work on the UPS page...

Is it under Delivery Number? Mine's 10 characters, which seems too short...
 
True, but I'm thinking more along the lines of getting FreeSync working in Boot Camp; that would make games infinitely more playable!
It will only work on external monitors that support it, but in theory, with up-to-date Catalyst drivers, that should work just fine.
 
Could you link me the benchmark you base that on? Because all the results I have seen show that the 980M obliterates the AMD cards in compute while drawing less power. Unless you are talking about FP64; it is true that Nvidia cards were not really made for high-performance double-precision computations.

Here is one set of benches I have seen (for desktop parts):

http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/15

Relevant quotes:

"Overall then the new GTX Titan X can still be a force to be reckoned with in compute scenarios, but only when the workloads are FP32. Users accustomed to the original GTX Titan’s FP64 performance on the other hand will find that this is a very different card, one that doesn’t live up to the same standards."

"What remains to be seen then is whether this graphics/FP32-centric design is a one-off occurrence for 28nm, or if this is the start of a permanent shift in NVIDIA GPU design."

----------
You'll see that the recent Kepler/Maxwell parts are single-precision optimized, which makes them more energy efficient (similar to past AMD parts) but poorer at higher-precision compute, in general.

The gain in energy efficiency comes from the strong optimization of Kepler/Maxwell parts for single precision, and therefore gaming, whereas the more recent, bigger AMD chips were built for both compute and games, like Nvidia's Fermi generation was.

I don't think they are comparable on power draw at max performance, FYI.
 
Hello guys! So I wanted to throw out a general question and see what everyone's personal opinion is on this; I'm sure I'm not the only one thinking about it. Basically, is it worth upgrading to this newer updated model, or is it best to wait for the next refresh? For reference, I am on an Early 2013 rMBP with the 2.7 quad-core i7, 16 GB RAM, 512 GB SATA SSD, and the GT 650M card. I wanted to see some real-world benchmarking, and it looks like MacDevMike has shared some good information, but I'm still curious whether it's worth selling mine and upgrading now or waiting another year for a Skylake/Nvidia refresh. As far as gaming is concerned, I only play WoW. Currently I play at Retina res on low settings and get 20-30 fps in intense areas and 50-60 in others. Is it worth upgrading to this newer machine? Thanks!
 

If you already have a Retina MacBook with dedicated graphics, keep it. Wait for whatever Apple comes up with in the next few years, as this new card seems to actually be old.

But I don't know if you should trust me; I've been using the same white MacBook for almost 6 years, and only now am I going to buy a new one :p
 