
lewnworxx

macrumors member
Original poster
Mar 19, 2015
So after a LOT of trials and tribulations, I finally have two GTX 970s running in my 5,1 cMP.

My first foray was with a flashed 680, which had all kinds of issues. It'd lock up the entire box mid-render and corrupt the video display, and I suspect the card itself was bad.

However, it greatly sped up the Blender renders of the stuff I'm modeling in Rhino.

I further determined that using the card to run the monitors DID impose a render time hit, so I pulled out a spare GT120 (the card these cMPs shipped with), stuffed it in slot 4, and plugged my two 24" 1920x1200 Dells into that. That was worth about an 18% increase in render speed. It doesn't seem like much until you have render times running well over 3 days for a 10-second anim. I don't game at all, so I couldn't care less about the card's performance in the display arena. It'll play a 1920x1080 QT @ 24 FPS without flinching, so that's fine for my stuff.
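(Quick back-of-the-envelope on what that 18% is worth. The 75-hour figure below is just an assumed stand-in for "well over 3 days", not a measured number:)

Code:
# rough arithmetic only; 75 h is an assumed stand-in for "well over 3 days"
baseline_hours = 75.0                      # render with the 970 also driving displays
dedicated_hours = baseline_hours / 1.18    # ~18% higher render throughput
print(dedicated_hours)                     # ~63.6 h, i.e. roughly 11 hours saved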

So after returning the faulty flashed GTX 680, I picked up an EVGA GTX 970 SC. That worked great once I got the security-patch-related driver issues sorted out. After reading Machine's post about the guy in the UK running a pair of cards off the two 6-pins with splitter cables, who did a several-day render with Hardware Monitor running and never saw more than 36 watts pulled on a boost line, I figured I could give a second card a go.

So I ordered the splitters off the guy on eBay (got here in 2 days, whee), and hopped over to Fry's to pick up a second 970. Dunno what it is with EVGA, but do they have to have 9 different flavors of the week for each of these cards? Got home before I realized the card was not even close to the same as the 970 SC. This one was the "SSC", and the card was longer (about an inch) and took a 6-pin AND an 8-pin, whereas the "SC" only took two 6-pins. That didn't bode well, so I pulled the existing 970 out, broke out EVGA's dual-6-pin-to-single-8-pin cable adapter, and plugged it in. It booted (probably because my video is off the GT120), although in System Information it showed up as an NVIDIA video card with 256 MB of VRAM. That didn't look promising.

Cuda-Z caused it to lock up the box completely the second it launched. Rebooted again and fired up Blender, and it didn't even recognize the 970 SSC as a GPU. Rather than jack around with it, I took that card back and looked for another 970 SC, but they didn't have any, although they did have a 970 FTW, which according to the box also took two 6-pins as its power input. OK fine, we'll try that. Got it out of the box and it's the same length as the SC, although its fin layout for the cooling is slightly different and some other components are in different places. It is a different board. How different? Who knows.

Put the 970 FTW in by itself, and it looked correct according to the system report, Cuda-Z showed it correctly, and it showed up in Blender. So I dropped in the second 970 (the previously pulled SC), hooked up the splitter cables, and powered up the box. Booted fine (10-second boots on a Samsung 1TB 850 EVO), and Cuda-Z showed both cards available and reporting (practically no difference in the various values on the performance tab between the 970 SC and the 970 FTW).
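For anyone who'd rather script it than click through the preferences, here's a rough sketch of forcing Cycles onto both CUDA cards from Blender's Python console. The property paths are from the later 2.7x bpy API (older builds exposed the compute device setting elsewhere), so treat it as a sketch rather than gospel; the 320 x 270 tile size matches what the benchmark results below used:

Code:
import bpy

# Point Cycles at CUDA and tick every CUDA device it can see
# (paths per the late-2.7x bpy API; adjust for your Blender build)
prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()
for dev in prefs.devices:
    dev.use = (dev.type == 'CUDA')   # enables both 970s, leaves the CPU unticked

scene = bpy.context.scene
scene.cycles.device = 'GPU'          # render this scene on the GPUs
scene.render.tile_x = 320            # tile size used in the results below
scene.render.tile_y = 270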


For comparison, here are the results of the 2nd Mike Pan render benchmark (the one with two BMWs) in Blender:

This is on a 2x 2.26 GHz quad-core Xeon 5,1 box running OS X 10.10.2.

CPU only (GTX 285 installed):
04:37.58

GPU On Flashed GTX 680 4GB:
01:50.37 (320 x 270 Tile)

GPU On Single EVGA GTX 970 SC 4GB:
01:22.50 (320 x 270)

GPU On EVGA GTX 970 SC 4GB + EVGA GTX 970 FTW 4GB:
0:44.44 (320 x 270)

Render times on my own benchmark, typical of what I normally render:

CPU only (GTX 285 Mac version installed):
21:25.18

GPU Single EVGA GTX 680 4GB (Mac-flashed ROM):
12:48.15

GPU Single EVGA GTX 970 SC 4GB:
07:36.31 (320 x 270)

GPUs EVGA GTX 970 SC 4GB + EVGA GTX 970 FTW 4GB:
03:51.91

I'd say the pair of cards is well worth it for the performance increase. With both cards going full tilt on renders, Hardware Monitor is reporting:

327W total (PS 1 Line 1)
47.2W PCIE Slot 1 12V Line
46.7W PCIE Slot 2 12V Line
32.8W PCIE Slot 1 12V Boost Line
29.9W PCIE Slot 2 12V Boost Line
13.2W PCIE Slot 4 12V Line (The GT120 video card driving the monitors).
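
(Summing those up against the usual spec limits. The 75W per-slot / per-6-pin figures below are the standard PCIe numbers, not something Hardware Monitor reports:)

Code:
# under-load readings from Hardware Monitor vs. the nominal 75W limits
limit = 75.0                               # W per PCIe slot and per 6-pin line (spec)
readings = {
    "slot 1 12V":   47.2, "slot 2 12V":   46.7,
    "slot 1 boost": 32.8, "slot 2 boost": 29.9,
}
print(sum(readings.values()))              # ~156.6W total pulled by the two 970s
print(max(readings.values()) / limit)      # worst line at ~63% of its 75W limit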

I'd say this is well within the safe zone. For a single box, no rewiring, no external or internal second PSU, and about $650 out of pocket to go from over 21 minutes a frame to just under 4 and cut my render times from 85 hours to 15? Oh hells yeah. Close to a 6X improvement. I'll take that any day of the week. Can't even buy a single 980 for that, let alone a Titan X.
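
(The arithmetic behind those numbers, using the per-frame times from my own benchmark above. The 240-frame count is an assumption implied by the 85-hour figure:)

Code:
# sanity check on the speedup and total-hours claims above
cpu_frame  = 21 * 60 + 25.18     # 21:25.18 per frame, CPU render
dual_frame =  3 * 60 + 51.91     # 03:51.91 per frame, dual 970s
frames = 240                     # assumed animation length (implied by the 85h figure)

print(cpu_frame / dual_frame)            # ~5.5x faster
print(cpu_frame  * frames / 3600)        # ~85.7 hours on the CPU
print(dual_frame * frames / 3600)        # ~15.5 hours on the dual 970s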

Cheers.
 
How can -TWO- 970's be drawing such low watts under load??

Have you tried running the Heaven benchmark and monitoring power use, especially if you have Windows installed on that Mac so you can try SLI mode?
 
LOL. Downloaded it.

3.5 FPS. But I'm running my monitors on the GT120 in slot 4.

As I mentioned above, I don't use the cards for video, just for rendering. I don't play games (at all) and have zero interest in 'em, so gaming benchmarks mean zero to me. Render speeds are all I care about, as that's the only reason the cards are in the box.

As for the low power draw while rendering, I have no idea. It may be related to the fact that they aren't being used for video, just rendering.

When not rendering the cards are drawing:
Power Supply 1 Line 1: 157.6W
PCIE Slot 1 12V Line: 2.8W
PCIE Slot 1 12V Boost Line: 7W
PCIE Slot 2 12V Line: 2.9W
PCIE Slot 2 12V Boost Line: 6.3W

Overall power draw confirmed by my Kill A Watt.
 
Well, it's interesting. I think you are right. Rendering as a compute job just doesn't use the wattage of actually generating 3D video.

Sorry for the Heaven request, I tripped up and thought you were using them for video, even though your post clearly said you aren't... lol.

Sounds like this is pure win for you. As long as the power draw is so low, seems perfectly safe for your mac to me. However, I doubt that your setup would work for gaming, which is a pity.

As a comparison, my EVGA GTX 680 Mac Edition under max load is pulling 66 watts from each boost cable and 43 watts from the PCIe slot. It's perfectly safe, but installing another one with power splitters would probably cause my 5,1 to turn itself off. I wonder if using the 680 as a compute platform also reduces its wattage load... Regardless, I'm very impressed with these new Maxwell cards from NVIDIA.
 
Roger that.

Big win, considering that 10 years ago I had 9 Macs in a render farm to get basically the same throughput.
 
This is definitely interesting.

I would love to see dual GTX 970s vs dual R9 280X. I'd be looking at FCPX and DaVinci performance.

I like the clean setup without modding the PSU and the general power efficiency.

Obviously your 3.5fps score in Heaven is from the GT120. Would be interesting to see what you'd get from a single 970.

What brand of 970s did you find to work well?
 
Thought it was listed in the first post.

EVGA GTX 970 SC, and EVGA GTX 970 FTW.

I had problems with the EVGA GTX 970 SSC. That one needs an 8-pin and a 6-pin, and even with a dual-6-to-single-8 adapter it would not run, and the OS saw it as a 256MB generic NVIDIA card.

Both of the 970s were in the $320-ish range (before tax) at Fry's. They'll match web prices, so at checkout I googled the current prices for the same cards and they matched 'em.

Also completed the first full animation render. Previous renders of this type on the single 680 took over 3 days. On the dual 970s it completed 240 frames in less than 12 hours. Oh hells to the yeah.
 
Rendering as a compute job just doesn't use the wattage of actually generating 3D video.

Well, at least for Blender, apparently quite true. Looking at the history in Hardware Monitor over the entire 12-hour render, the peak for the entire box was 430.2W with the average hovering around 380W. The PCIe boost lines peaked at 62.2W and 63.9W and the PCIe slot lines at 66.0W and 65.5W, with the boost lines averaging around 55W and the slot lines around 58W, all values well under the 75W limit.
 
Would this config work well for Premiere Pro and After Effects?

Unsure, I don't use either of them.

However, if their render engines use CUDA under the hood as opposed to OpenCL, my guess is it would work just fine.

FWIW, I've since run benchmarks using LuxMark, which is an OpenCL app, and placed 13th overall at 17K for dual GPUs.

What I can say is I've been VERY happy with the pair of these in Blender.
 
Would be interesting to see what you'd get from a single 970.

Maybe the attached shots can help. Mind you, it was using a Hack, at maximum settings at 1440p.
 

Attachments

  • Screen Shot 2015-04-04 at 10.47.51.png
  • Screen Shot 2015-04-04 at 10.47.45.png
Also added a DaVinci Resolve benchmark result for that Standard Candle 10 benchmark.
 

Attachments

  • Screen Shot 2015-04-04 at 10.53.59.png
Thanks @Gwendolini for those uploads.

Are you using the full DaVinci? I've tried to run that Standard Candle test a couple of times, but it just won't import/load with my DaVinci Lite (tried a couple of different versions).

I'll do some benchmarks at your resolution with my 280X. I think it's around 18-19 fps on my 3.33GHz 6-core Westmere.

The gfx card will be the greatest influence on the score of course, but I think you're probably getting a boost from that 4GHz CPU too.
 
I am using Resolve Lite, but I had to reassign some media folders to the footage; there is a tutorial for it out there. I can't remember the correct terms right now.

As for Heaven and CPU see attachment.
 

Attachments

  • Screen Shot 2015-04-04 at 23.09.58.png
It does not seem possible to edit a post and add another attachment, so here is another one, from Activity Monitor.
 

Attachments

  • Screen Shot 2015-04-04 at 23.13.39.png
I am using Resolve Lite, but I had to reassign some media folders to the footage; there is a tutorial for it out there.

Yes, I've got the footage, and re-assigning it wouldn't be a problem. It's just that opening or importing that project file draws a blank. Nothing at all opens in the interface: no media with "missing footage", nothing in the color page...

I'll look into it again. I would like to test it since Barefeats uses it and he has checked many configurations. It would be useful for evaluating price/performance investments.

----------

As for Heaven and CPU see attachment.

This image can't possibly be a representative average.

I'm open to the idea that modern game design relies much more on the GPU than it did when I last looked at game benchmarks (probably 8+ years ago), but almost zero usage makes no sense. Your second grab with around 30% still surprises me, but OK, that's cool.

There might also be a difference between playing an actual game and reading a relatively short benchmark into memory? Anyway, that doesn't matter when looking at gfx card performance. It's perfect that it doesn't rely too much on the CPU.
 
Got the DaVinci test up and running and it switches between 3 and 4 fps evenly every other second. I'd say 3.5 fps.

Ouch. =)

But those 66 blur nodes aren't what Barefeats runs. He runs noise reduction, which is VERY unfortunate since it requires the full DaVinci.

Where can I find a database with results for this test with 66 blur nodes?
 
This image can't possibly be a representative average.

I'm open to the idea that modern game design relies much more on the GPU than it did when I last looked at game benchmarks (probably 8+ years ago), but almost zero usage makes no sense. Your second grab with around 30% still surprises me, but OK, that's cool.

That is 30% out of 800%, thus 3.75% CPU usage.

It is a GPU benchmark after all.

My i7 scores around 17,100 multi-core and 4,400 single-core in the 64-bit Geekbench 3 application.

As for DaVinci and a database, you might have to look for forum entries for the Candle benchmark.
 
I'm a bit bummed! I just ordered the SSC version of this card! Looks like I may be in for some problems.... I might be able to cancel the order before it ships on Monday.
 
Good luck with that. When I went to get the second card they were out of SCs and only had the SSC. As a single card with the dual-6-to-single-8-pin adapter, I still couldn't get it to run under the web drivers. So I took it back and swapped it for the FTW. Both the SC and FTW variants work fine under the web drivers. I've since picked up a couple more SCs to drop into a second 5,1 as a render slave.
 
Do you know if you bought the SSC version or the SSC ACX 2.0+ Version? I bought the latter.
 