Should give the model number on the line below. ;)

It should, but it doesn't on many models. Please read other forum threads and you'll notice many people have the same issue (with older models as well) :)

I can use the same command on my old machine to see the version. Before I changed the monitor I got the same result, if I recall correctly.
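For anyone following along, the command being discussed is presumably `system_profiler SPDisplaysDataType` (the same info as About This Mac > System Report > Graphics/Displays). Here's a minimal sketch that pulls out just the model line, assuming OS X with Python available:

```python
import subprocess

# Ask System Profiler for the graphics/displays section.
out = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True, check=True,
).stdout

# Print only the GPU model line(s). As noted above, on some machines
# this line may be missing or show a generic string.
for line in out.splitlines():
    if "Chipset Model" in line:
        print(line.strip())
```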
 
Has anyone been able to run the test with the 2.8GHz model to see if the Fire Strike score is really any better or not?

I was waiting for those numbers as well. In another thread here regarding 370X GPU performance, a member was noting throttling when running FurMark. I also noticed that when I was running Afterburner during Fire Strike, between load screens it looked like the core clock speed was only 400MHz. Could it be that the AMD driver is throttling certain apps by their file name (e.g. FurMark.exe)?

This could explain why the 3DMark11 numbers suggested higher Fire Strike numbers than we actually saw. If 3DMark11 wasn't on the driver's list of apps to throttle, but 3DMark/Fire Strike was, then those numbers were just throttled and the 3DMark11 numbers were legit. It makes me wish I had benchmarked the 750M and the 370X in Windows back to back in WoW, D3, Tomb Raider, and a few others before returning the new system.

I have an auto-alert configured with B&H Photo for the fully loaded system with the 4980HQ and 1TB SSD. I am passing on the iMac for now due to folks saying the R9 M295X runs too hot and throttles frequently. I may see if they update the Mac Pro at WWDC in 10 days before doing anything. That could mean steep deals on current systems, and also interesting possibilities for what they use in D300-D700 updates. I know that the Mac Pro in Windows can run CrossFire in games, which would have been interesting.
 
That's normal "throttling" behavior: in between 3DMark benchmarks, the GPU goes back into 2D mode, which is a lower-power state. That's why, if you really want to check for true throttling, you have to be running a heavy recent title like The Witcher 3 or Dragon Age: Inquisition. Lighter games like Diablo 3 or WoW usually won't cause a modern GPU to throttle.

The 'Heaven' benchmark is another good one for making a GPU throttle.

BTW, both modern AMD and Nvidia drivers have special code in them to recognize FurMark when it's running, and they will throttle the GPU in order to save the GPU, and especially the power circuitry, from frying. Amusingly, both companies call FurMark a "power virus".
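One way to tell the 2D-mode idling apart from real throttling is to log the core clock while the GPU is actually rendering, not between load screens. A minimal sketch for the Nvidia (750M) side, assuming nvidia-smi is on the PATH; the AMD side would need Afterburner or GPU-Z logging instead:

```python
import subprocess, time

# Poll the graphics core clock once per second via nvidia-smi.
# Sustained drops well below the boost clock *during* a render pass
# suggest throttling; low clocks between loading screens are just
# the 2D/idle power state described above.
while True:
    mhz = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.gr",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"{time.strftime('%H:%M:%S')}  core clock: {mhz} MHz")
    time.sleep(1)
```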
 
If anything, an AMD driver would push the clocks during a benchmark, not reduce them; they want to look good. FurMark is not a benchmark but a stress-testing utility, and the reason all drivers try to throttle it is that, according to the vendors, it produces such an unrealistically high load that even stock clocks are not safe.
AMD definitely does not throttle 3DMark on purpose. If it throttles, it is an actual heat issue.
 
They need to do some of their benchmarking again. How does a 2015 rMBP with an AMD GPU beat an iMac 5K in Photoshop? Even the base model iMac 5K has a superior CPU/GPU... I seriously doubt it's the SSD difference either, as Photoshop files aren't gigabytes in size.

They didn't test the 15" in that test; that was the 13". Also worth noting that they are running the maxed-out 2015 MBP with the 4980HQ. Looks like they have some additional tests (likely gaming) coming shortly.
 
[Image: rp2015_iris.png]

It's probably a misprint...
 
Gaming tests on Barefeats are usually ****. They test those horrendous Cider ports that run like ****.
Instead of testing a state-of-the-art game like Elite Dangerous, which received an amazing native port... :rolleyes:
Also, they don't test Final Cut Pro...
 
Agreed. If FCPX truly renders close to 2x faster than on the 750M, I may just upgrade to the 2015 anyway, even without a major dGPU upgrade.
 
I would laugh if the next MacBook Pro had no Skylake but a next-gen AMD processor, and Apple completely got rid of the dGPU ;) It does look like they are cooperating more and more closely with AMD, trying to become more independent from Intel.
 
Where do you get this completely random idea from? :rolleyes:
 
Well, some people here did mention that Apple might want to get rid of the dGPU, especially with iGPUs getting better and better. And I read that AMD's next-gen CPU, "Zen", is supposed to at least match Intel's Skylake.
 
AMD's iGPUs have always been the equal of, and usually much better than, Intel's iGPUs. The problem is that their CPUs just plain suck. None of their CPUs can beat an i5/i7; in fact, their latest chips aren't even up to Sandy Bridge level yet. I do miss the early years of the last decade, when Athlon 64 chips were kicking Pentium ass... That forced Intel to release the Core 2 Duo series, which soundly trounced everyone else. If AMD were still competitive today, Broadwell would've come out last year and we'd be typing on Skylake rMBPs...
 
Nah, the real issue is the MASSIVE CUDA problem going on between Nvidia and Apple.

All 15" rMBPs with the 750M are unable to use CUDA successfully without massive display corruption, lockups, etc.

It's pretty brutal.

I also suspect this is part of the issue. If you can't use the CUDA cores, what's the point of paying for them?

The problem has been going on for a long time, with no fix from Nvidia. I'm sure Apple was not too pleased, especially if the chips are more expensive for a feature that doesn't work.
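For what it's worth, a quick way to check whether CUDA even initializes and enumerates the 750M (this only exercises the basic handshake, not the corruption/lockup behavior described above) is a few lines of PyCUDA, assuming it's installed:

```python
import pycuda.driver as cuda

# Initialize the CUDA driver and list whatever devices it can see.
cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print(f"Device {i}: {dev.name()} (compute capability {major}.{minor})")
```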
 
My order just came into the store today. Unfortunately, work decided to keep me till 10:30 today... Hopefully I'll get off before the store closes tomorrow so I can do my own testing.
 
AMD is the best choice in my opinion, since all new games will be optimized for AMD GPUs (the new Xbox and PlayStation both use AMD GPUs). I can see Apple sticking with AMD over the next couple of years.
 
It doesn't really matter if the GPU itself is 30-50% slower, does it now? No matter how much you optimise, you can't cross that gap unless you deliberately sabotage how your code runs on other platforms.
 
Summing up for those unable to watch videos:

3DMark 11: 3975
Fire Strike: 2203
Ice Storm: 102246
Cloud Gate: 11672
PCMark 8 Home Accelerated: 36029

I'm still waiting for real-world benchmarking between the 750M and M370X on NEW games, not two-year-old ones...
 