
Twimfy (Original poster)
Ok, so at present I have a GTX 650 in my octo-core 3,1; it was a card I just happened to have spare when the Mac Pro arrived a few months ago.

I only ever game in Boot Camp, and for the most part the 650 is fine, but I would like a little more performance, as newer releases are making me drop the settings and resolution further and further as time goes by, which is to be expected. I also play a lot of CS:GO and I'd love to have everything maxed while enjoying a good framerate.

On the OS X side of things I spend quite a bit of time in Unity.

I have heard a lot of people say that there are various bottlenecks in the 3,1 that limit the throughput of more powerful cards.

Every time I've seen this mentioned it seems to erupt in an argument with no clear answer.

I really would like to know what the absolute limit is for the 3,1 (with a little bit of proof) before performance gets totally crippled and a purchase becomes a waste of money. I'm not at all interested in ATI cards, so any Nvidia advice would be much appreciated.

I'm considering a 680, but something from the 7 series could be nice too.

Thanks in advance.
 
I did some benchmarks a few weeks ago in a thread here: Post1 and Post2

tl;dr: There's no general answer; it all depends on whether a game is already CPU-bound with your current GPU or not.

From my personal observations in OS X (I've recently had the following GPUs in a 3,1: HD 5770, HD 5870, HD 7870, R9 280, GTX 570, GTX 760), the CPU overhead of the Nvidia drivers is much bigger than with the AMD drivers. In heavily CPU-bound games like CS:GO (OS X version), even a crappy HD 5770 will outperform every single Nvidia card you can think of.
In more GPU-demanding games like Metro: Redux (or the Unigine benchmark), more powerful cards can stretch their legs. The 3,1 will still be a little slower than my i5 Hackintosh, but the gap isn't as noticeable as in CS:GO.

I wouldn't go beyond GTX 680 or R9 280(X) level in a 3,1, but up to that level an upgrade can be worth it (depending on your games, of course). I also wouldn't recommend an AMD card for a 3,1 because of the known performance bug, which requires deleting a few kext files.
If you want me to do any special benchmark, just ask for it, most cards are still nearby :)
 
I think I'm beginning to understand now. I expected way more from the 650 in CS:GO compared to other cards I've had in the past, and yet I'm still having to drop a few things down some notches to keep the frame rate steady (although for some reason Windows 10 has given me a good 10-20fps boost over Win 8.1; I have no idea why, there doesn't seem to be any rhyme or reason to it). Of course, I forgot that it's a fairly CPU-intensive game, but I think it's set up more for hyperthreaded cores than physical cores/dual CPUs, unless there are launch options that help; I've tried -threads 2 and -threads 4 and both screwed the FPS.

So it seems I may just have to go as far as a 680 and leave it at that. I really don't want to be messing with AMD cards and kexts etc., as my Hackintosh days ended in 2013 and I want to keep it that way.
 
From the software's point of view there's no difference between a logical core and a physical core; I'd assume that CS:GO just needs more single-core performance (I never verified the load balance though). Maybe the FSB is also a bottleneck for this specific game.

If you want, we can both do the same benchmark in CS:GO and compare scores, then you'll see if a better GPU will improve anything. I'll test with R9 280 and GTX 760 in Win & OS X, with maxed out settings @ 1920x1200.
 
That would be awesome (although OS X isn't necessary as I never play in OS X due to mouse smoothing issues). I'm probably not going to be able to do that until tomorrow evening but I'll post my results as soon as I'm able.
 
Okay, I'll report back :)

Btw, I keep asking myself why almost every GPU works out of the box on first boot in OS X without doing *anything*, while Windows requires installing drivers and restarting over and over again? :mad:
You'd think it would be the other way around with a proprietary vs. a generic OS...

Btw, mouse acceleration can be disabled with 3rd party tools.
 
System spec:

2x2.8Ghz
10GB Ram 800Mhz
Zotac GTX 650 1GB GDDR5


CS:GO Settings:
Everything maxed at 1080p, apart from vertical sync buffering, which was set to disabled. Motion blur enabled.

Win10 64-bit, fully updated (running on a 5400RPM HDD), very latest Nvidia driver with no game optimisation. Results:

7487 frames 88.016 seconds 85.06 fps (11.76ms/F) 7.930 fps variability

-------------------

Very weirdly the game seemed to be running very smoothly with those settings maxed. Going to give those maxed out settings a go in deathmatch and see how it performs.
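For what it's worth, the three numbers in that timedemo result line are just different views of the same measurement; a minimal Python check using the frame count and duration from the output above:

```python
# Numbers taken from the timedemo result line above
frames = 7487
seconds = 88.016

fps = frames / seconds                   # average frames per second
ms_per_frame = 1000 * seconds / frames   # average frame time in milliseconds

print(f"{fps:.2f} fps, {ms_per_frame:.2f} ms/frame")  # prints "85.06 fps, 11.76 ms/frame"
```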
 
Okay, here we go:
As I've said: 1920x1200, everything maxed out, benchmark fps_test3 from the 4th post in this thread, same Mac Pro as yours.

AMD R9 280:
OS X: 113fps
Win7: 145fps

Nvidia GTX 760:
OS X (stock): 92fps
OS X (WebDriver): 106fps
Win7: 141fps

As mentioned in one of the GTX 680 threads, this doesn't apply to all maps; OS X performance on some newer maps is rather crappy on my Nvidia card, while they run better on the AMD. The Web Driver improves performance notably (more than the +14fps average would suggest), because it removes some short but huge fps drops.
In Windows everything is smooth with both cards as far as I can tell.

Both cards are flashed & running @ PCIe 2.0, btw.
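To put those numbers side by side, here's a small Python sketch (figures copied from the results above) that expresses each OS X score as a fraction of the same card's Win7 score:

```python
# fps figures from the benchmark results above (fps_test3, 1920x1200, maxed out)
results = {
    "R9 280":  {"OS X": 113, "Win7": 145},
    "GTX 760": {"OS X (stock)": 92, "OS X (WebDriver)": 106, "Win7": 141},
}

for card, runs in results.items():
    win = runs["Win7"]
    for setup, fps in runs.items():
        print(f"{card:8} {setup:18} {fps:3d} fps  ({fps / win:.0%} of Win7)")
```

So in this particular run, OS X lands at roughly 65-78% of the Windows score, depending on card and driver.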
 
Hmmm, PCIe 2.0. I'm assuming that in Win10 it should already be good to go, assuming it's a software limitation in OS X?

If that's the case then it looks like I could gain almost a 100% FPS boost with a 680.
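As a rough cross-check of that estimate (assuming a GTX 680 lands in broadly the same performance class as the GTX 760 tested above, and keeping in mind the two tests used different resolutions and demos), comparing the GTX 650's Windows result against the GTX 760's suggests the gain would be closer to two-thirds than a full doubling:

```python
gtx650_win = 85.06   # GTX 650 timedemo result in Win10, from earlier in the thread
gtx760_win = 141     # GTX 760 result in Win7, from the benchmark post

gain = gtx760_win / gtx650_win - 1
print(f"~{gain:.0%} faster")  # prints "~66% faster"
```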
 
No, it's an EFI restriction which also applies to Windows. Shouldn't matter very much in gaming though.

I wouldn't expect the GTX 680 to be notably faster than my cards in a 3,1, as they're already a little limited (performance is better in my Hackintosh, though I don't have any numbers at hand). In other games (or in raw compute power) this will differ, obviously.
 
Yep, I fixed that in my tests by deleting the mentioned PowerManagement kexts. My AMD performance would have been much worse with those kexts in place.
 