
Sedor

macrumors member
Dec 28, 2013
64
0
Germany
Hey Tutor, I just bought one :)

Awaiting a new PSU, as my RM1000 won't cut it powering 4 x 580s... hopefully all 5 cards will be recognised :)
One in the Mac and 4 on the stand; I'll let you know how I go.

Also, two negatives for anyone thinking about it:
1. It's not very easy to swap cards; the bottom 2 cards are locked in, so you have to disassemble the unit to change them.
2. The bottom slots only fit reference-thickness GPUs, so anything over the standard size won't fit in the bottom.

Great! Can't wait to hear whether it works. I am also planning to get one of those GPU-Oriented Clusters.
 

ravTOM

macrumors newbie
Jan 12, 2015
2
0
Hi Mr. Tutor, I need your help, mate. I was thinking of buying a GTX 980 for Octane rendering, but after reading some of your posts I decided against it and am now leaning towards a 780 Ti for my work.
Could you tell me whether the 780 Ti is a better choice than the 980, or whether the Octane developers will alter their code to better utilize Maxwell, in which case I should buy a 980 if Maxwell may soon prove the better choice through newer, tighter integration with Octane?
I need to buy the GPU within a week, so I'd greatly appreciate a quick response. Thanks in advance.
 

Sedor

macrumors member
Dec 28, 2013
64
0
Germany
Sedor,
Please keep us updated on how things go with the Amfeltec.

I will, but that will be later this year. First I am going to get another graphics card (I'm looking for a Titan Z), and then I'll get one of those Amfeltec clusters. Testing the "grid" will then involve a mixture of GTX 680/780/980/Titan Z.

The good thing is that the official Amfeltec dealer in Germany isn't far from my home.
 

Tutor

macrumors 65816
Original poster
Hi Mr. Tutor, I need your help, mate. I was thinking of buying a GTX 980 for Octane rendering, but after reading some of your posts I decided against it and am now leaning towards a 780 Ti for my work.

If you plan to do 4K rendering, I'd recommend finding a used GTX 780 6GB, a used original Titan, or a used Titan Black, because they have 6 GB of VRAM. The 780 Ti, which has 3 GB of VRAM, is about 1.3x faster than the GTX 780 6GB and the original Titan at 3D rendering, and just a tad faster than a GTX Titan Black. So that's a choice you have to make based on your rendering needs and budget.

Could you tell me whether the 780 Ti is a better choice than the 980, or whether the Octane developers will alter their code to better utilize Maxwell, in which case I should buy a 980 if Maxwell may soon prove the better choice through newer, tighter integration with Octane?

From my most recent research on the issue, the 780 Ti is still faster and less problematic [ http://render.otoy.com/forum/search.php?keywords=GTX+980 ] in Octane than the GTX 980. The Octane developers have been altering their code to better utilize Maxwell, but that's still a work in progress. As to what the future holds, that exceeds my pay grade.

----------

I will, but that will be later this year. First I am going to get another graphics card (I'm looking for a Titan Z), and then I'll get one of those Amfeltec clusters. Testing the "grid" will then involve a mixture of GTX 680/780/980/Titan Z.

The good thing is that the official Amfeltec dealer in Germany isn't far from my home.

Just remember that for most purposes involving 3D rendering, the active GPU with the least memory sets the limit for all other active GPUs. So if you had a couple of active 4 GB cards, an active 3 GB card, and an active 6 GB card, the rendering software will most likely be limited to 3 GB renders unless you deselect the 3 GB card; then the software will be limited to 4 GB renders unless you also deselect the 4 GB cards. If the project fits within 3 GB of VRAM, then you can use them all.
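That rule can be sketched as a quick check. This is just a hypothetical helper to illustrate the point, not part of any rendering package:

```python
def usable_vram_gb(active_cards):
    """Effective per-render VRAM is capped by the smallest active card.

    active_cards: list of VRAM sizes in GB for the GPUs currently
    selected for rendering. Returns 0 if no cards are selected.
    """
    return min(active_cards) if active_cards else 0

# Two 4 GB cards, one 3 GB card, one 6 GB card all active:
print(usable_vram_gb([4, 4, 3, 6]))   # 3 -- capped by the 3 GB card
# Deselect the 3 GB card and the cap rises to 4 GB:
print(usable_vram_gb([4, 4, 6]))      # 4
```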
 
Last edited:

ravTOM

macrumors newbie
Jan 12, 2015
2
0
Thanks, Tutor. I seldom find such helpful people.
I hope Octane adapts soon. I just checked the compute benchmarks of the 980 against the 780 Ti on AnandTech, and apparently the 980 fares really well in terms of computing power.

I'm going to use it a lot for particle dynamics (X-Particles & TFD), and I think the 980 should give me better performance at that.

I hope I can rely on the 980 for GPU renders too in time to come.

Really excited to see some of your work, Tutor. Could you share some links to it? :):)
 

philliplakis

macrumors member
Nov 19, 2014
97
1
AUS
Sure, please do let me know how it goes. I have 5x GTX 580s, but all 5 of mine are the Classified Ultra 3GBs that have 2x 8-pin + 1x 6-pin power connectors [= 75 watts from the PCIe slot + 2x 150 watts for the two 8-pin connectors + 75 watts for the 6-pin connector, for a grand total of 450 watts per card!]. Thus, I'd need 1800 watts of power for just four of them to run at their max TDP, not to mention needing even more power to fully overclock them.

Great! Can't wait to hear whether it works. I am also planning to get one of those GPU-Oriented Clusters.

So here it is...

Absolutely NO way of running 5 cards :(

OS X will boot, but the 5th card will just show as DISPLAY, and Octane will crash...
I tried multiple cards and configs.

4 will have to do...

Here I have a GTX 680 4GB internal and 1x 780 Ti, 2x 780, and 1x 680 on the cluster.

I tried 580s all flashed to 680s; no chance of getting 5 working...

Do you think Windows could recognize all 5?

Also, Tutor... the 780 Ti I have is the base EVGA one; could I flash its BIOS to the Superclocked version?



P.S. Not the prettiest/neatest setup, but at 1/3 the price of other expansion options it will do.

P.P.S. I'm thinking of getting an old G5 case and customising it to house the cluster...
 

Attachments

  • IMG_4108.JPG
    IMG_4108.JPG
    1.1 MB · Views: 218
  • IMG_4109.JPG
    IMG_4109.JPG
    1.3 MB · Views: 213
  • IMG_4110.JPG
    IMG_4110.JPG
    1.3 MB · Views: 203
Last edited:

Tutor

macrumors 65816
Original poster
So here it is...

Absolutely NO way of running 5 cards :(

OS X will boot, but the 5th card will just show as DISPLAY, and Octane will crash...
I tried multiple cards and configs.

4 will have to do...

Here I have a GTX 680 4GB internal and 1x 780 Ti, 2x 780, and 1x 680 on the cluster.

I tried 580s all flashed to 680s; no chance of getting 5 working...

Do you think Windows could recognize all 5?

What model Mac Pro are you using?
I'm experiencing similar issues with my Tyan mod: it currently appears that 8 GPU processors is the limit for the Tyan server, whether that's 8 Titans or 4 Titan Zs, but I'm still working on pinpointing the source of the problem; then, hopefully, I can find a solution. I'm not sure that Windows wouldn't recognize all five of your GPUs (if what you're referring to is using Boot Camp), but I'm beginning to doubt that GPU limitations are an OS issue alone. It appears to be more of a system hardware issue. However, I'm aware of an individual using an Amfeltec chassis with a Supermicro server. His system recognizes all ten of his GTX 780 Ti GPUs, but Octane Standalone and LW/Octane recognize only seven of them [ http://render.otoy.com/forum/viewtopic.php?f=23&t=44209&p=217929#p217929 ].

Also, Tutor... the 780 Ti I have is the base EVGA one; could I flash its BIOS to the Superclocked version?

I'm not sure whether or how you can overclock it, except perhaps by running in Boot Camp and using a utility such as EVGA Precision X or MSI Afterburner. However, keep in mind that the Keplers will run in boost mode if their temperature is kept low.


P.S. Not the prettiest/neatest setup, but at 1/3 the price of other expansion options it will do.

P.P.S. I'm thinking of getting an old G5 case and customising it to house the cluster...

1) The other expansion options, such as the Cubix, aren't without their own issues.
2) Please keep in mind that if you put the chassis in a fully enclosed case, you'll most likely lose one important benefit of the chassis: the GPUs on the external chassis most likely run cooler under full load because they're almost fully exposed to ambient air. Enclosing them in a case could therefore reduce their performance: their clocks may not boost as high and may, in fact, begin to throttle downward under heavy use.
 

Tutor

macrumors 65816
Original poster
Recent estimated pricing for Amfeltec clusters

Here's what I recently got for quotes (in USD):

"... . Please find below quotation per your request.
Quotation (Prices in USD)
----------------------------------------------------------------------------------------------------------------
SKU Item Qty Unit Price
----------------------------------------------------------------------------------------------------------------
SKU-078-11 GPU-Oriented PCIe Cluster (5ft cable) 3 $ 362.48 USD
[1 Clusters (up to 4 GPUs) + 1 channel Host board]
Freight Shipping and Handling ( 3 units) $ 99.84 USD
FedEx Air

SKU-078-21 GPU-oriented PCIe Cluster (5ft cable) 4 $ 372.71 USD
[1 Clusters (up to 4 GPUs) + 2 channel Host board]
Freight Shipping and Handling ( 4 units) $ 116.46 USD
FedEx Air

SKU-078-22 GPU-oriented PCIe Cluster (5ft cable) 1 $ 578.78 USD
[2 Clusters (up to 8 GPUs) + 2 channel Host board]
Freight Shipping and Handling (1 unit) $ 78.46 USD
FedEx Air

SKU-078-41 GPU oriented PCIe Cluster (5ft cable) 5 $ 412.48 USD
[1 Clusters (up to 4 GPUs) + 4 channel Host board]
Freight Shipping and Handling (5 units) $ 129.09 USD
FedEx Air

SKU-078-42 GPU oriented PCIe Cluster (5ft cable) 6 $ 618.55 USD
[2 Clusters (up to 8 GPUs) + 4 channel Host board]
Freight Shipping and Handling (6 units) $ 128.42 USD
FedEx Ground

SKU-078-43 GPU oriented PCIe Cluster (5ft cable) 7 $ 824.62 USD
[3 Clusters (up to 12 GPUs) + 4 channel Host board]
Freight Shipping and Handling (7 units) $ 199.39 USD
FedEx Ground

SKU-078-44 GPU oriented PCIe Cluster (5ft cable) 2 $1030.69 USD
[4 Clusters (up to 16 GPUs) + 4 channel Host board]
Freight Shipping and Handling (2 units) $ 93.05 USD
FedEx Ground

For 10ft cable for SKU-078-xx please add $12.75 USD per PCIe Cluster

----------------------------------------------------------------------------------------------------------------- ...."

My understanding of the quotes is, e.g., that the price for each unit of SKU-078-11, for purchases of up to three units, is $362.48 USD each (excluding shipping), and that a further discount may be given for larger purchases; but I'm not absolutely sure that my understanding is correct. So you're advised to get Amfeltec to answer any questions that you have about their quotes.
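As a rough sanity check on how the bundles scale, here is a cost-per-GPU-slot comparison of a few of the quoted single-quantity bundle prices (unit prices copied from the quote above; shipping excluded, and this is only my reading of the quote, not Amfeltec's):

```python
# (SKU, quoted unit price in USD, max GPUs the bundle supports)
quotes = [
    ("SKU-078-11",  362.48,  4),
    ("SKU-078-22",  578.78,  8),
    ("SKU-078-43",  824.62, 12),
    ("SKU-078-44", 1030.69, 16),
]

# Per-slot cost drops as the bundles grow: the larger host boards
# amortize better across more cluster backplanes.
for sku, price, gpus in quotes:
    print(f"{sku}: ${price / gpus:.2f} per GPU slot")
```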
 
Last edited:

Tutor

macrumors 65816
Original poster
Even After Defeat, My Life Goes On

1) The 8-GPU-processor brick wall: My Tyan server has 8 double-wide GPU slots. Until recently, I had hoped that I could load the Tyan with 8 double-wide GPU cards and have it recognize all of the GPUs. My goal was to use Titan Z cards to exceed 8 GPU processors and have 14 GPU processors in one system. Having tried BIOS mods, Regedit hacks and various card stack configurations, the hardware will not recognize more than 8 GPU processors. So having multiple GPUs on one card does not get around the 8-GPU limit, because the limit is processor-based. I can put in either a maximum of 8 single-GPU-processor GTX cards (such as 780 Tis, 780 6GBs, original Titans, Titan Blacks, etc.), or 4 dual-processor GTX cards (such as 590s, 690s or Titan Zs), or a combination of single- and dual-processor cards, so long as I do not exceed the 8-GPU-processor limit. All of the processors above 8 are wasted. Because of that discovery, I've decided to put 2 Titan Zs (4 GPU processors), plus 1 Titan Black (1 GPU processor), plus 3 Titans (3 GPU processors) (4+1+3=8) in my Tyan server. So I assess my accomplishment of my goal here as a major failure.
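The combination rule above (single-GPU cards count once, dual-GPU cards count twice, total capped at 8 processors) can be sketched as a small hypothetical checker:

```python
def fits_processor_limit(single_cards, dual_cards, limit=8):
    """Return True if a mix of single- and dual-GPU cards stays
    within the motherboard's GPU-processor limit (8 on the Tyan)."""
    return single_cards + 2 * dual_cards <= limit

# The chosen mix: 2 Titan Zs (dual) + 1 Titan Black + 3 Titans (singles)
print(fits_processor_limit(single_cards=4, dual_cards=2))  # True: 4 + 4 = 8
# 4 Titan Zs alone also hit exactly 8; a 5th dual card would not fit:
print(fits_processor_limit(single_cards=0, dual_cards=5))  # False: 10 > 8
```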

2) Mr. Freezer begging for attention: While devoting all of my time on this project to the 8-GPU-processor limit, I haven't had time to find the optimal settings for Mr. Freezer. Disappointingly, I haven't seen temps for coolant exiting Mr. Freezer below 20 degrees centigrade. Usually when I've checked the coolant temps exiting Mr. Freezer and entering my reservoir, they're around 21-23 degrees centigrade. The system was then recognizing only 8 of the 14 GPU processors, but while rendering Octane benchmarks and other render jobs, the processors' temperatures (read by GPU-Z) never exceeded 35 degrees centigrade and averaged 33 degrees centigrade. At idle, the GPU processors' temperatures were within 1-2 degrees centigrade of the coolant temperature as it exited Mr. Freezer (so the GPU temps ranged from about 22 to 25 degrees centigrade). I was and am shooting for an average of around 5 degrees centigrade at idle and around 10 degrees centigrade under load. So I assess my accomplishment of my goal here as, currently, trending toward major failure with a tiny semblance of promise.

3) Flopping about: GPU-Z readings of single-precision floating-point peak performance are (A) slightly in excess of 4,500 GFLOPS for each of my old Titan SCs and (B) slightly in excess of 5,000 GFLOPS for my Titan Black and for each GPU of my Titan Zs. I've yet to overclock any of them for these tests. So after reconfiguring the GPU cards in my Tyan server as indicated above, I fully intend to achieve a single-system floating-point peak of 38,500 GFLOPS (5x5,000 + 3x4,500) before I even overclock any of the GPUs; 38,500 GFLOPS equals 38.5 TFLOPS (teraflops).*/ However, my goal was to achieve a single-system single-precision floating-point peak of at least 56 TFLOPS. So I assess my accomplishment of my goal here as, currently, strongly trending towards complete failure, with the degree of failure depending significantly on how no. 2, above, shakes out, since my ability to safely overclock the GPUs is affected by their temperatures.
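The arithmetic behind that peak figure, spelled out (per-GPU numbers are the approximate GPU-Z readings quoted above):

```python
# Approximate single-precision peaks read from GPU-Z, in GFLOPS
TITAN_Z_PER_GPU = 5_000   # a Titan Z carries two GPUs
TITAN_BLACK     = 5_000
TITAN_SC        = 4_500

# 2 Titan Zs (4 GPUs) + 1 Titan Black + 3 Titan SCs = 8 GPU processors
total_gflops = 4 * TITAN_Z_PER_GPU + 1 * TITAN_BLACK + 3 * TITAN_SC
print(total_gflops)            # 38500 GFLOPS
print(total_gflops / 1_000)    # 38.5 TFLOPS
```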

4) Pulling it apart and reconfiguring, this time not from scratch: Luckily, I had the foresight to buy and spread quick disconnects throughout this mod, to buy two dual pumps (one as a replacement; luckily, I don't need it yet), and to buy lots of tubing and non-conductive coolant. So I'll be reconfiguring the Tyan as set forth above. I'll be installing the other four Titan Zs (40 of the TFLOPS that I had intended to all be in one system) in one of my SilverStone cases, on a motherboard with 4 double-wide PCIe GPU slots. Both of these systems will be water-cooled by the same cooling system. I am glad that I at least had a plan in place for potential failure(s). Life goes on, regardless.


*/I. On June 10, 2013, China's Tianhe-2 was ranked the world's fastest single computer with a record of 33.86 petaflops. [ http://en.wikipedia.org/wiki/FLOPS#Single_computer_records ]
 

DJenkins

macrumors 6502
Apr 22, 2012
274
9
Sydney, Australia
Even After Defeat, My Life Goes On

Gee, I don't think you should beat yourself up too much! Such barriers come with the territory when challenging the boundaries of what can be achieved within a single machine. I guess you've been doing it your whole life, right?!

I think Otoy uses an InfiniBand setup for their rig. Is that an option you could consider? I don't know much about it though, and it could go against the goal of having an all-in-one machine :(

With Mr. Freezer, it seems the fluid is not spending enough time in there to become chilled. Is there a way to extend the time the fluid takes to travel through the freezer? Or a more conductive method than air chilling? Keeping the distance between the freezer output and the PC as short as possible would help too.

I hope you can come through with something from your efforts!
 

Tutor

macrumors 65816
Original poster
Gee, I don't think you should beat yourself up too much! Such barriers come with the territory when challenging the boundaries of what can be achieved within a single machine. I guess you've been doing it your whole life, right?!

I think Otoy uses an InfiniBand setup for their rig. Is that an option you could consider? I don't know much about it though, and it could go against the goal of having an all-in-one machine :(

With Mr. Freezer, it seems the fluid is not spending enough time in there to become chilled. Is there a way to extend the time the fluid takes to travel through the freezer? Or a more conductive method than air chilling? Keeping the distance between the freezer output and the PC as short as possible would help too.

I hope you can come through with something from your efforts!

DJ,

Thanks for all of your thoughts and suggestions.

I've considered infiniband, but many of its benefits are limited to the Tesla GPUs which are currently out of my price range for the performance derived from using them.

Also, I'll consider further how I might keep the fluid in the freezer longer.
 

Tutor

macrumors 65816
Original poster
Follow-up on post #1186

Since I've yet to be able to get my Tyan server, a/k/a WolfPackAlphaCanisLupus0, to recognize more than eight GPU processors, I am currently unable to reach my goal of a single-system GPU peak of 56 TFLOPS. Instead, I can currently reach only a single-system floating-point peak of 46,312.12 GFLOPS, or 46.31 TFLOPS,*/ using two Titan Zs, one Titan Black and three Titans (my original SC/OC versions) for a total of eight slightly overclocked GPUs. I've yet to water-cool two of the Titan SC/OCs, so there may still be a little room for improvement.

*/ " As of November 2014, China's Tianhe-2 supercomputer is the fastest in the world at 33.86 petaFLOPS (PFLOPS), or 33.86 quadrillion floating point operations per second." [ http://en.wikipedia.org/wiki/Supercomputer ]
 

Attachments

  • WolfPackAlphaCanisLupus - SP floating point peak performance - 46,312.12 Gflops.PNG
    WolfPackAlphaCanisLupus - SP floating point peak performance - 46,312.12 Gflops.PNG
    110.2 KB · Views: 171
Last edited:

Tutor

macrumors 65816
Original poster
WolfPackAlphaCanisLupus0 On High Octane

Here's a pic showing how WolfPackAlphaCanisLupus0 now performs in Octane Render, rendering the benchmark scene in 2 sec. (DL).



BTW - Temps of water-cooled GPUs (overclocked) are about 26 degrees centigrade at idle and < 34 degrees centigrade while rendering. See pics below, showing Min = Idle and Max = Loaded.
 

Attachments

  • WolfPackAlphaCanisLupusOnOctaneBenchMarkScene1.20.PNG
    WolfPackAlphaCanisLupusOnOctaneBenchMarkScene1.20.PNG
    1.3 MB · Views: 180
  • CaptureTempsMaxRendering.PNG
    CaptureTempsMaxRendering.PNG
    23.8 KB · Views: 183
  • CaptureTempsMinIdle.PNG
    CaptureTempsMinIdle.PNG
    24.3 KB · Views: 186
  • CaptureTempsMaxRenderingTitanZ.PNG
    CaptureTempsMaxRenderingTitanZ.PNG
    24.3 KB · Views: 190
Last edited:

Sedor

macrumors member
Dec 28, 2013
64
0
Germany
SKU-078-11 GPU-Oriented PCIe Cluster (5ft cable) 3 $ 362.48 USD
[1 Clusters (up to 4 GPUs) + 1 channel Host board]
Freight Shipping and Handling ( 3 units) $ 99.84 USD
FedEx Air

The latest price I found here was about 540 euros (ca. USD 610)... I think I'll buy one of those clusters next month.

----------

Just remember that for most purposes involving 3D rendering, the active GPU with the least memory sets the limit for all other active GPUs. So if you had a couple of active 4 GB cards, an active 3 GB card, and an active 6 GB card, the rendering software will most likely be limited to 3 GB renders unless you deselect the 3 GB card; then the software will be limited to 4 GB renders unless you also deselect the 4 GB cards. If the project fits within 3 GB of VRAM, then you can use them all.

Yes, I have that in mind. For the first days/weeks, my mixed setup of 4GB and 6GB card(s) should be okay; currently I'm thinking my final setup will be 4x Strix GTX 780 OC with 6GB VRAM running on the cluster and the 980 (4GB) as the internal card... of course, only if we get 5 cards running...
 

Tutor

macrumors 65816
Original poster
So here it is...

Absolutely NO way of running 5 cards :(

OS X will boot, but the 5th card will just show as DISPLAY, and Octane will crash...
I tried multiple cards and configs.

4 will have to do...

Here I have a GTX 680 4GB internal and 1x 780 Ti, 2x 780, and 1x 680 on the cluster.

I tried 580s all flashed to 680s; no chance of getting 5 working...

Do you think Windows could recognize all 5?

Also, Tutor... the 780 Ti I have is the base EVGA one; could I flash its BIOS to the Superclocked version?



P.S. Not the prettiest/neatest setup, but at 1/3 the price of other expansion options it will do.

P.P.S. I'm thinking of getting an old G5 case and customising it to house the cluster...

Have you tried the following Regedit mod (from the Octane User Manual) to get Octane to recognize all of your GPUs properly? I've had to do it on almost every one of my systems.


"Issue 9. Windows and the Nvidia driver see all available GPU's, but OctaneRender™ does not.

There are occasions when using more than two video cards that Windows and the Nvidia driver properly register all cards, but OctaneRender™ does not see them. This can be addressed by updating the registry. This involves adjusting critical OS files, it is not supported by the OctaneRender™ Team.

1) Start the registry editor (Start button, type "regedit" and launch it.)

2) Navigate to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}

3) You will see keys for each video card starting with "0000" and then "0001", etc.

4) Under each of the keys identified in 3 for each video card, add two dword values:
DisplayLessPolicy
LimitVideoPresentSources
and set each value to 1

5) Once these have been added to each of the video cards, shut down Regedit and then reboot.

6) OctaneRender™ should now see all video cards."
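For a system with many cards, the manual's manual steps can be scripted. The sketch below only *generates* the equivalent `reg add` command lines as text rather than touching the registry, so you can review them before running them in an elevated Windows prompt (the key GUID is copied from the manual above; `num_cards` is an assumption you'd set to match how many `0000`, `0001`, ... subkeys your system actually has):

```python
# Display-adapter class key from the Octane manual, HKLM-rooted
KEY = (r"HKLM\SYSTEM\CurrentControlSet\Control\Class"
       r"\{4D36E968-E325-11CE-BFC1-08002BE10318}")

def reg_commands(num_cards):
    """Build 'reg add' command lines for the DisplayLessPolicy /
    LimitVideoPresentSources workaround, one pair per video card."""
    cmds = []
    for i in range(num_cards):
        subkey = f"{KEY}\\{i:04d}"          # subkeys 0000, 0001, ...
        for value in ("DisplayLessPolicy", "LimitVideoPresentSources"):
            cmds.append(
                f'reg add "{subkey}" /v {value} /t REG_DWORD /d 1 /f'
            )
    return cmds

for cmd in reg_commands(5):   # e.g. a 5-card setup
    print(cmd)
```

Remember the manual's caveat: this edits critical OS state and is not supported by the OctaneRender team, so reboot after applying and keep a registry backup.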
 
Last edited:

Tutor

macrumors 65816
Original poster
WolfPackAlphaCanisLupus0 On High Octane (Cont'd)

Here's a pic showing how WolfPackAlphaCanisLupus0 now performs in Octane Render (after some additional GPU clock tweaking), rendering the benchmark scene in 1 sec. (DL), along with other pics of renders (and their render times) from the Octane Demo package.













 

Attachments

  • 8GPUOctaneRenderBenchmark-1sec.PNG
    8GPUOctaneRenderBenchmark-1sec.PNG
    1.3 MB · Views: 212
  • 8GPUOctaneRenderScrews-4 sec (DL).PNG
    8GPUOctaneRenderScrews-4 sec (DL).PNG
    1.5 MB · Views: 212
  • 8GPUOctaneRenderHall-2 sec (DL).PNG
    8GPUOctaneRenderHall-2 sec (DL).PNG
    1.3 MB · Views: 213
  • 8GPUOctaneRenderChestSet(Proc)2 sec (DL).PNG
    8GPUOctaneRenderChestSet(Proc)2 sec (DL).PNG
    1.5 MB · Views: 209
  • 8GPUOctaneRenderChestSet(IM)1 sec (DL).PNG
    8GPUOctaneRenderChestSet(IM)1 sec (DL).PNG
    1.5 MB · Views: 204
Last edited:

Tutor

macrumors 65816
Original poster
WolfPackAlphaCanisLupus0 On High Octane (Cont'd)

More direct lighting (DL) renders and their render times from the Octane Demo package.
 

Attachments

  • 8GPUOctaneRenderScatter-61 sec (DL).PNG
    8GPUOctaneRenderScatter-61 sec (DL).PNG
    1.8 MB · Views: 180
  • 8GPUOctaneRenderSpaceships 33 sec (DL).PNG
    8GPUOctaneRenderSpaceships 33 sec (DL).PNG
    1.4 MB · Views: 205
Last edited:

Tutor

macrumors 65816
Original poster
WolfPackAlphaCanisLupus0 On High Octane (Cont'd): Sizes May Matter

Higher Resolution Always Matters.

In contrast to that 1 sec rendering time for the Octane Benchmark windowed scene depicted above in post no. 1195, a full aperture 4k (4096 x 3112) rendering of that scene took 37 seconds and weighs in at 54.8 MB.

In contrast to that 1 sec rendering time for the Octane Chess_set image-mapped (IM) windowed scene depicted above in post no. 1195, a full aperture 4k (4096 x 3112) rendering of that scene took 42 seconds and weighs in at 50.1 MB.

In contrast to that 2 sec rendering time for the Octane Chess_set procedural (Proc) windowed scene depicted above in post no. 1195, a full aperture 4k (4096 x 3112) rendering of that scene took 42 seconds and weighs in at 52.3 MB.

In contrast to that 2 sec rendering time for the Octane Hallway windowed scene depicted above in post no. 1195, a full aperture 4k (4096 x 3112) rendering of that scene took 17 seconds and weighs in at 55.5 MB.

In contrast to that 4 sec rendering time for the Octane Screws windowed scene depicted above in post no. 1195, a full aperture 4k (4096 x 3112) rendering of that scene took 1 min and 6 seconds and weighs in at 55.5 MB.

In contrast to that 33 sec rendering time for the Octane Spaceship windowed scene depicted above in post no. 1196, a full aperture 4k (4096 x 3112) rendering of that scene took 9 min. and 43 seconds and weighs in at 44.8 MB.

In contrast to that 1 min 1 sec rendering time for the Octane Scatter windowed scene depicted above in post no. 1196, a full aperture 4k (4096 x 3112) rendering of that scene took 16 min. and 51 seconds and weighs in at 56.7 MB.
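As a rough back-of-the-envelope check on how the times above scale: full-aperture 4K (4096 x 3112) is about 12.7 megapixels, far more than the windowed benchmark viewport, so large slowdowns are expected. The seconds below are simply transcribed from the figures quoted above:

```python
# (scene, windowed seconds, full-aperture 4K seconds), from the posts above
scenes = [
    ("Benchmark",  1,   37),
    ("Hallway",    2,   17),
    ("Screws",     4,   66),
    ("Spaceship", 33,  583),   # 9 min 43 sec
    ("Scatter",   61, 1011),   # 16 min 51 sec
]

pixels_4k = 4096 * 3112
print(f"4K frame: {pixels_4k / 1e6:.1f} MP")

# Slowdown factor going from the windowed view to full 4K
for name, windowed, full_4k in scenes:
    print(f"{name}: {full_4k / windowed:.1f}x slower at 4K")
```

The heavier scenes (Spaceship, Scatter) slow down by a fairly consistent factor, while the trivial ones are dominated by per-frame overhead rather than pixel count.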

For purposes of GPU card comparison, I decided to employ the Octane Scatter scene:
1) One of my tweaked Titan Z Hydros (2xGK110B processors) rendered a full aperture 4k (4096 x 3112) image from that Scatter scene in one hour, 4 min and 6 sec.;
2) My tweaked Titan Black Hydro (1xGK110B processor) rendered a full aperture 4k (4096 x 3112) image from that Scatter scene in two hours, 7 min and 31 sec.; and
3) One of my tweaked Titans (non-Hydro, air-cooled) (1x GK110A processor) rendered a full aperture 4k (4096 x 3112) image from that Scatter scene in two hours, 38 min and 27 sec.

As earlier posted, these renders all employ Octane's Direct Lighting (DL) option.

Note well that although it may appear that my Titan Black Hydro performs better than one GPU processor of my Titan Z Hydro, I do not believe that is the case, because all of my Titan Zs have a lot more tweakability left in them, whereas my Titan Black doesn't. Moreover, my Titan Z Hydros appear to have a wider native boost range. In sum, I believe that the Titan Zs have better-binned GK110Bs that were clocked well below the Titan Black level because of thermal/TDP issues that don't exist in my water-cooled/Hydro systems.
 
Last edited:

Tutor

macrumors 65816
Original poster
WolfPackAlphaCanisLupus0 On High Octane (Cont'd): OpenCL

Here's how WolfPackAlphaCanisLupus performs OpenCL rendering using LuxMark's OpenCL tests:

1) Sala - 20,371;
2) Luxball - 125,194 and
3) Room - 8,348.
 

Attachments

  • MyRig+SalaScore+LuxBall+Room+MSI.PNG
    MyRig+SalaScore+LuxBall+Room+MSI.PNG
    743 KB · Views: 180
Last edited:

Tutor

macrumors 65816
Original poster

Attachments

  • WolfPackAlphaCanisLupusOpenGL.PNG
    WolfPackAlphaCanisLupusOpenGL.PNG
    366.5 KB · Views: 169
  • FurmarkCaptureFull.PNG
    FurmarkCaptureFull.PNG
    451.2 KB · Views: 148
Last edited: