
Tutor

macrumors 65816
Original poster
The New King of FurryBall Rasterization

Nvidia's new GTX 980 is about 1.4 times faster [5.36 secs / 3.75 secs = 1.43] than a GTX 780 Ti in FurryBall rasterization (biased rendering {no GI, no raytracing}: cartoon-like rendering for TV series, previz, VFX...) [ http://www.aaa-studio.cz/furrybench...Scene=0&orderByTime=1&orderByID=0&version=4.8 ]. See user IDs 4969 and 4970.
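
Since the benchmark reports render times (lower is better), the speedup is simply the slower card's time divided by the faster card's. A minimal Python sketch of that arithmetic, using the two times quoted above:

    gtx_780_ti_secs = 5.36                    # GTX 780 Ti render time
    gtx_980_secs = 3.75                       # GTX 980 render time
    speedup = gtx_780_ti_secs / gtx_980_secs  # old time / new time
    print(round(speedup, 2))                  # 1.43, i.e. ~1.4x faster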

The GTX 980 has yet to shine at all in Octane rendering [ http://render.otoy.com/forum/viewtopic.php?f=9&t=42497 ]. The GTX 980's raytracing performance still has a ways to go - maybe it needs better drivers, or CUDA needs tweaking.
 
Last edited:

lowendlinux

macrumors 603
Sep 24, 2014
5,443
6,750
Germany
Tutor, from reading the various threads in here it seems some of your stuff runs Linux either full time or part time. You use Geekbench quite a bit to measure performance; have you noticed a Geekbench score difference between OSX and Linux? The reason I ask is that I have a Z600 with a pair of E5620s, and my Geekbench score is 3000 lower than a comparable Mac Pro's. While running the multi-threaded benchmark, I noticed that some of the cores were dropping out; I assume it's simply because it's not optimized for Linux. I'm not a big benchmarker, but I would like to see my box keep up with comparable Macs.
 

Tutor

macrumors 65816
Original poster
Not all OSes or even OS versions are created equal.

Tutor, from reading the various threads in here it seems some of your stuff runs Linux either full time or part time. You use Geekbench quite a bit to measure performance; have you noticed a Geekbench score difference between OSX and Linux? The reason I ask is that I have a Z600 with a pair of E5620s, and my Geekbench score is 3000 lower than a comparable Mac Pro's. While running the multi-threaded benchmark, I noticed that some of the cores were dropping out; I assume it's simply because it's not optimized for Linux. I'm not a big benchmarker, but I would like to see my box keep up with comparable Macs.

Yes, I've noticed that Geekbench scores differ between OSX, Windows and Linux. If you'd like me to give you my thoughts about how to improve your system's performance, then tell me everything that you know about each of its components and the more/most important applications that you use. Also, is your system a true Mac, a Hack, Linux-only, Windows-only or a combination thereof? The more that you tell me about what hardware you have and the applications that matter most to you, the more applicable my advice will be.

Every one of my systems runs Linux part-time, as I run a number of Linux-only applications. I use Linux (especially, now, Mint), OSX and Windows (Server and 7 Ultimate), along with Geekbench and Cinebench 15 (to the extent possible), to help me tune my general system tweaks to achieve maximum, safe, long-term performance from my CPUs and memory. I use Octane Render [plus, lately, Thea Render and FurryBall] benchmark software and EVGA Precision X [to the extent possible] to help me tune my GPUs to achieve maximum, safe, long-term performance.

In particular, with my 32-core/64-thread Sandy Bridge systems, Linux lets them shine brightest with the highest scores (among the OSes); OSX has had thread limits that were too low - in Mavericks it's 32 threads max. With my Nehalem/Westmere systems, OSX lets them shine brightest (among the OSes) with the highest scores. Windows does not do true justice to my Nehalems, Westmeres or Sandy Bridges; but, as is the case with Linux, some of my important applications are Windows-only. Generally, when a system has been tuned for maximum performance under one OS, it will likewise be tuned for maximum performance under any other OS. Different OSes may still yield different results, but among other systems running that same OS, the system will still shine brightest.

Apple did its best native power management work with the Nehalems and Westmeres - I was, and am, able to get them to emulate the wide (and in most cases even wider) power management bin ranges characteristic of Sandy and Ivy Bridge CPUs. I can achieve bin ranges of 7 to 8 {lowest} and 13 to 14 {highest} on my Nehalem/Westmere Hacks, with my 12-core Westmeres achieving Geekbench 2 scores of 40,050 & 40,100 and Cinebench 11.5 scores of ~24.7.
 
Last edited:

lowendlinux

macrumors 603
Sep 24, 2014
5,443
6,750
Germany
Yes, I've noticed that Geekbench scores differ between OSX, Windows and Linux. If you'd like me to give you my thoughts about how to improve your system's performance, then tell me everything that you know about each of its components and the more/most important applications that you use. Also, is your system a true Mac, a Hack, Linux-only, Windows-only or a combination thereof? The more that you tell me about what hardware you have and the applications that matter most to you, the more applicable my advice will be.

Every one of my systems runs Linux part-time, as I run a number of Linux-only applications. I use Linux (especially, now, Mint), OSX and Windows (Server and 7 Ultimate), along with Geekbench and Cinebench 15 (to the extent possible), to help me tune my general system tweaks to achieve maximum, safe, long-term performance from my CPUs and memory. I use Octane Render [plus, lately, Thea Render and FurryBall] benchmark software and EVGA Precision X [to the extent possible] to help me tune my GPUs to achieve maximum, safe, long-term performance.

In particular, with my 32-core/64-thread Sandy Bridge systems, Linux lets them shine brightest with the highest scores (among the OSes); OSX has had thread limits that were too low - in Mavericks it's 32 threads max. With my Nehalem/Westmere systems, OSX lets them shine brightest (among the OSes) with the highest scores. Windows does not do true justice to my Nehalems, Westmeres or Sandy Bridges; but, as is the case with Linux, some of my important applications are Windows-only. Generally, when a system has been tuned for maximum performance under one OS, it will likewise be tuned for maximum performance under any other OS. Different OSes may still yield different results, but among other systems running that same OS, the system will still shine brightest.

Apple did its best native power management work with the Nehalems and Westmeres - I was, and am, able to get them to emulate the wide (and in most cases even wider) power management bin ranges characteristic of Sandy and Ivy Bridge CPUs. I can achieve bin ranges of 7 to 8 {lowest} and 13 to 14 {highest} on my Nehalem/Westmere Hacks, with my 12-core Westmeres achieving Geekbench 2 scores of 40,050 & 40,100 and Cinebench 11.5 scores of ~24.7.

It's an HP Z600 with 12GB of Crucial memory (6@2GB) and 2 E5620s. It's a Linux-only box that varies between a FirePro V5800 for normal use and 2 GTX 560s for Blender. It runs Arch 64-bit with the MATE desktop (read: GNOME 2). There isn't much you can do; I'm locked out of the BIOS, which is fine, since I'm not much interested in OC'ing it. I just wondered if you'd noticed a difference in scores between OSX and Linux. I'd pondered hackintoshing it so I could work from home (I even have a UniBeast Mavericks USB stick), but after having left the Mac world years ago, I think it's enough to just use my Mac Pro at work. Processors have dropped greatly in price over the last year or so, so much so that I can actually afford to get some and probably will, and that should keep me happy for a while at least.

Thanks much for your time!
 

Tutor

macrumors 65816
Original poster
It's an HP Z600 with 12GB of Crucial memory (6@2GB) and 2 E5620s. It's a Linux-only box that varies between a FirePro V5800 for normal use and 2 GTX 560s for Blender. It runs Arch 64-bit with the MATE desktop (read: GNOME 2). There isn't much you can do; I'm locked out of the BIOS, which is fine, since I'm not much interested in OC'ing it. I just wondered if you'd noticed a difference in scores between OSX and Linux. I'd pondered hackintoshing it so I could work from home (I even have a UniBeast Mavericks USB stick), but after having left the Mac world years ago, I think it's enough to just use my Mac Pro at work. Processors have dropped greatly in price over the last year or so, so much so that I can actually afford to get some and probably will, and that should keep me happy for a while at least.

Thanks much for your time!

In that case, take a look at these:

I. CPUs
1) https://www.eoptionsonline.com/639493-l21.html = X5690s for $987.00 ea., or
2) https://www.eoptionsonline.com/591892-b21.html = X5680s for $551.25 ea.
Those X5680s appear to be the better value. They should give you a 70%-90% CPU performance increase in many highly threaded applications because of the additional cores and higher speeds for all cores [ http://www.cpu-world.com/Compare/834/Intel_Xeon_E5620_vs_Intel_Xeon_X5680.html ].
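
As a rough sanity check on the relative value, you can compare aggregate cores x clock per dollar - a crude model that assumes highly threaded performance scales with both. The base specs below are Intel's published figures; the prices are from the links above. A minimal Python sketch:

    e5620_baseline = 4 * 2.40             # cores x base GHz of the stock E5620
    chips = {"X5680": (6, 3.33, 551.25),  # (cores, base GHz, $ per CPU)
             "X5690": (6, 3.46, 987.00)}
    for name, (cores, ghz, price) in chips.items():
        throughput = cores * ghz
        print(f"{name}: {throughput / e5620_baseline:.2f}x the E5620, "
              f"{throughput / price * 1000:.1f} core-GHz per $1000")
    # X5680: 2.08x the E5620, 36.2 core-GHz per $1000
    # X5690: 2.16x the E5620, 21.0 core-GHz per $1000 - the X5680 wins on value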

II. GPUs
1) http://www.newegg.com/Product/Product.aspx?Item=N82E16814487076 = EVGA 04G-P4-2974-KR G-SYNC Support GeForce GTX 970 4GB Video Card for $350 ea., or
2) http://www.newegg.com/Product/Product.aspx?Item=N82E16814487067 = EVGA 04G-P4-2980-KR GeForce GTX 980 4GB 256-Bit GDDR5 PCI Express 3.0 G-SYNC Support Video Card for $550 ea.
Depending on the level of performance you need for Blender GPGPU rendering, these are probably the best values for significant increases in rendering performance (200% to 400%, and, with the additional VRAM, capacity for many more textures and larger scenes) [ http://www.game-debate.com/gpu/inde...0-vs-geforce-gtx-560-twin-frozr-ii-oc-edition ]. I'm now awaiting the 6-8 gig versions of the GTX 980.

Caveat: Just be aware that further driver updates may be necessary before the GTX 900s are truly the GPGPU computing beasts that they are hardware-wise, and Blender may need to be updated as well.

Update: This might be the one that takes care of the driver issue. http://www.nvidia.com/download/driverResults.aspx/77844/en-us
 
Last edited:

bobah

macrumors newbie
Sep 24, 2013
4
0
Hi Tutor

Can you show us all your rigs? Maybe a panoramic photo?

How do you store them?
How do you manage them? How many monitors?

Maybe you use a laptop and spread your work over all the rigs?
 

Tutor

macrumors 65816
Original poster
Hi Tutor

Can you show us all your rigs? Maybe a panoramic photo? How do you store them?

Because they are all heavily GPU-config'ed and each is required, at a minimum, to be on a separate electrical circuit, they're stored in rooms (two systems at most per room) on each of the three floors of my building. A panoramic pic of them all would look like a panoramic pic of some of the following:

1) http://www.newegg.com/Product/Produ...E16811129100&gclid=CISOl9jEjcECFeXm7Aod7Q4AWg ;

2) http://www.tyan.com/Barebones_FT72B7015_B7015F72V2 ;

3) http://www.newegg.com/Product/Produ...cm_re=silverstone_case-_-11-163-185-_-Product ;

4) http://www.superbiiz.com/detail.php?name=SY-847R7FP ;

5) http://www.newegg.com/Product/Product.aspx?Item=N82E16811112390&cm_re=lian_li-_-11-112-390-_-Product ; and

6) cMPs and 2008 MBP.

How do you manage them? Maybe you use a laptop and spread your work over all the rigs?

Usually via my 2008 MBP as the master/controller, but at times, as project(s) dictate(s), in CPU and/or GPU clusters.

How many monitors?

That varies with the project(s), from 1 or 2 up to 13 or 16, and everything in between.
 

Tutor

macrumors 65816
Original poster
Serendipity - A Supposedly Good Deal Goes Bad, But Is Rescued by Chance

When I was purchasing my GTX Titans last year, I also purchased Arctic Accelero coolers, which have since gone out of production - http://www.arctic.ac/us_en/accelero-hybrid.html . Being satisfied with the performance of my Titans, I hadn't installed the Arctic Accelero coolers on them.

Recently, I purchased some Zotac GTX 780 Ti OverClock (OC)s (see the first pic below, with the 3-fan cooler) from Newegg for $430 each [ http://www.newegg.com/Product/Product.aspx?Item=N82E16814500322 ]. When I installed them, all but one would throttle down to below 500 MHz because their temperatures would rise to the mid-90s centigrade when rendering. They'd idle at close to 50 degrees+. At the price I paid for those GPUs, Newegg offers only an exchange - no refund. So I started contemplating sending them back, knowing that what I'd likely get as an exchange would mimic what I had first received.

While Mr. Teeter’s inversion table hung me upside down, I recalled that I had the Arctic Acceleros. As soon as I uprighted myself, I installed an Arctic Accelero on the hottest Zotac. The last/bottom pic below shows the outcome, after I tweaked them some more. That pic was taken in the course of rendering the OctaneRender V1.20 Benchmark scene, and the apps on the right show what GPU-Z, CUDA-Z and CPU-Z were displaying at that time. I’ve since installed an Arctic Accelero on each Zotac (see the pic of one of them below - the second pic).

Now, when rendering on those Zotac GPUs, the max temp is about 45 degrees, even on texture-heavy, large-format scenes. Moreover, after some additional clock tweaking, each of those stock overclocked GPUs renders the OctaneRender V1.20 Benchmark scene in about 42 seconds, as opposed to ~84 seconds for each of my Titan SCs, ~72 seconds for each of my non-Zotac (i.e., EVGA) GTX 780 Tis, and 95 seconds in the case of Barefeats' Reference Design (RD) Titan. Thus, my Zotacs now render twice as fast as my Titans. So it seems that those Zotacs really were overclocked; just overclocked too high for their coolers. With the correct cooler, they have a lot more headroom. Now I have to find lots more of those Arctic Accelero coolers for some of my other GPUs.
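
For render-time benchmarks, relative speed is just the inverse ratio of the times. A minimal Python sketch using the OctaneRender times above:

    times_secs = {"Zotac 780 Ti OC (tweaked)": 42,
                  "EVGA 780 Ti": 72,
                  "Titan SC": 84,
                  "Barefeats RD Titan": 95}
    zotac = times_secs["Zotac 780 Ti OC (tweaked)"]
    for card, secs in times_secs.items():
        # speed of the tweaked Zotac relative to each card
        print(f"Zotac is {secs / zotac:.2f}x as fast as {card}")
    # 1.00x, 1.71x, 2.00x (hence "twice as fast as my Titans"), 2.26x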

Update: Couldn't find Arctic Accelero V1 coolers anywhere for less than $350 (I paid $93 ea.). I may have to settle for a $70 Corsair Hydro H55 CPU cooler [ http://www.corsair.com/en-us/hydro-series-h55-quiet-cpu-cooler ] plus a $30 Kraken G10 CPU-to-GPU converter [ http://www.nzxt.com/product/detail/138-kraken-g10-gpu-bracket.html ].
 

Attachments

  • ZotacCooler.jpg · 518.3 KB · Views: 145
  • ArcticModifiedZotac.jpg · 409.5 KB · Views: 175
  • OctaneOnZotac12844.6MhzCapture.PNG · 1.6 MB · Views: 188
  • TutorsZotacGTX780TiUnigineScoreCapture.PNG · 39.6 KB · Views: 164
  • TutorsZotacGTX780TiUnigineScoreExtremeCapture2.PNG · 40.4 KB · Views: 180
Last edited:

Tutor

macrumors 65816
Original poster
Looks good Tutor! Including the radiator, how many can you actually fit in one system? Looks like they take up quite a bit of space.

DJenkins,

I hope that you and your family are healthy and happy.

The finished mod takes up a little more space than a regular double-wide GPU does, so I'll probably have to figure out a way to mod the mods. Right now each of them is about a quarter of an inch too fat. However, as I pointed out in my last post, this specific closed-loop cooler (the Arctic Hybrid I) has been EOL'd, and every seller (and they are few) now wants about $350 for one of them, since the Hybrid II has garnered a reputation for sucking (negatively) big time. The alternative that I mentioned at the end of my last post (a $70 Corsair Hydro H55 CPU cooler [ http://www.corsair.com/en-us/hydro-s...iet-cpu-cooler ] (it has the least height of the Corsair Hydros) plus a $30 Kraken G10 CPU-to-GPU converter [ http://www.nzxt.com/product/detail/1...u-bracket.htm ]) may fit better than the Arctic Hybrid from the start. Liquid cooling those GPGPUs really allows their capabilities to shine, and it should promote their longevity. Thus, I may become the guinea pig, of sorts, again, if for no other reason than to prolong the useful life of my GPGPU purchases.
 

Tutor

macrumors 65816
Original poster
GTX 780 Tis render faster under Unigine 4.0 using the Direct3D 11 renderer than they do using the OpenGL renderer. These latest result pics show that increasing the render resolution to 2560x1440, unsurprisingly, results in far fewer frames per second being rendered and, accordingly, much lower scores. I tried a number of times to run the Unigine test using a custom setting of 3840x2160 (my Samsung U28D590D's highest resolution = 4K; 133.3 kHz @ 60 Hz, using DisplayPort connections/cable), but each test (whether using 32-bit or 16-bit color, or 256 colors) froze the system, which then needed to be rebooted.
 

Attachments

  • Unigine2560x1400SettingsCaptureOCL.PNG · 967.9 KB · Views: 118
  • TutorsZotacGTX780TiUnigineScoreExtreme2560x1400CaptureOCL.PNG · 35.9 KB · Views: 116
  • Unigine2560x1400SettingsCaptureD3d11.PNG · 965.1 KB · Views: 129
  • TutorsZotacGTX780TiUnigineScoreExtreme2560x1400CaptureD3d.PNG · 35.6 KB · Views: 118
  • DisplayResolution3840x2160.PNG · 91.6 KB · Views: 129
Last edited:

Tutor

macrumors 65816
Original poster
Do The Tweaking Math, Realize That Lunch Isn't Free, And See The Compute Weaning

The reference design (RD) GTX 980s and 780s may be only for gamers and sporadic/short/light renders.

1. The numbers continue to favor a tweaked GTX 780 Ti for GPGPU computing generally.

A. GTX 780 Ti
2880 CUDA Cores * 1284.6 MHz = 3,699,648;

B. GTX 980
3,699,648 / 2048 CUDA Cores = 1806.5 MHz. No way it can be tweaked that much!

Whereas the RD GTX 980 is a substantial computer-graphics improvement over the x80s, especially the regular 680s, and a good improvement over the regular 780s, I do not currently believe that the RD GTX 980's CUDA computing ability surpasses that of the GTX 780 Ti when both are tweaked to their max.
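
A minimal Python sketch of that break-even arithmetic (assuming, as above, that CUDA compute throughput scales roughly with cores x clock):

    gtx_780_ti = 2880 * 1284.6          # CUDA cores x tweaked clock (MHz)
    break_even_mhz = gtx_780_ti / 2048  # clock a GTX 980 would need to match it
    print(round(break_even_mhz, 1))     # 1806.5 MHz - far beyond any realistic tweak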

2. Moreover, consider this from Tomshardware's review [ http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-12.html ] -

"Stress Test Power Consumption

If the load is held constant, then the lower power consumption measurements vanish immediately. There’s nothing for GPU Boost to adjust, since the highest possible voltage is needed continuously. Nvidia's stated TDP becomes a distant dream. In fact, if you compare the GeForce GTX 980’s power consumption to an overclocked GeForce GTX Titan Black, there really aren’t any differences between them. This is further evidence supporting our assertion that the new graphics card’s increased efficiency is largely attributable to better load adjustment and matching.

[Graphic Omitted.]

The values above have potential consequences for the everyday operation of these graphics cards, as they represent what can be expected when running performance-hungry compute-oriented applications optimized for CUDA and OpenCL.

That's not the only offering that makes a good impression, though. Nvidia's reference GeForce GTX 980 does well too, as long as you don’t focus on the idle power measurement. This isn't the same result as custom models with higher power targets (up to 300 Watts for the GTX 980), though, when compute-based stress tests are run. A taxing load just doesn't give Maxwell room for its optimizations to shine.

Note that the GeForce GTX 980's stress test power consumption is actually a few watts lower than the gaming result. This is likely due to throttling that kicks in when we hit the thermal ceiling."

Thus, it appears that Nvidia may be weaning GPGPU computing onto much higher-end (pricier and yet-to-be-announced) Maxwell GPUs that are truly the replacements for the 780 Ti, if not solely the Titan. It could be that the pricey Titan Z was a harbinger of Nvidia's new tactic. GPGPU compute rendering is a process that I believe will cause these current low-TDP Maxwells to constantly max out their TDP and throttle, bringing them to their knees, unless these GPUs are underclocked. These particular RD Maxwells probably will not adversely affect machines, like the stock cMP, when doing heavy/sustained compute tasks such as 3D rendering, but rendering will slow down vastly. Those who do heavy CUDA-based rendering may be better served by (a) getting a non-RD Maxwell variant (with at least one 8-pin power connector) with adequate TDP overhead and sufficiently powering it, (b) getting a GTX 780, 780 Ti or Titan Black, (c) waiting for the release of the true Maxwell replacements for the 780 Ti or Titan (they'll have at least one 8-pin power connector), or (d) getting a Tesla card.
 
Last edited:

DJenkins

macrumors 6502
Apr 22, 2012
274
9
Sydney, Australia
The reference design (RD) GTX 980s and 780s may be only for gamers and sporadic/short/light renders.

Let's hope they don't divert the product line and move the GPGPU/CUDA performance to the workstation cards... leaving the low-power-draw GeForce GTX cards focused on the gaming crowd.
 

Tutor

macrumors 65816
Original poster
A Lian Li PC-D8000: A Hulking, Expensive Case Designed To Meet Almost All Challenges

Looks good Tutor! Including the radiator, how many can you actually fit in one system? Looks like they take up quite a bit of space.

The four Zotacs will be going into one of my two old EVGA SR-2 builds with 7 single-width PCIe slots (so each actually accommodates 4 double-wide PCIe cards). Those are my antiquated builds that scored 40,100 and 40,050 in Geekbench 2 some years ago. I'm priming them to make a comeback in big Lian Li PC-D8000 cases [ http://www.lian-li.com/en/dt_portfolio/pc-d8000/ ].

I'm using my Dremel tools to mod away at the unnecessary projections on the Arctic Accelero's plastic cover. I think that I'll be able to get all four GPUs into one system. This is still a work in progress.

My second concern was essentially the same one you expressed - where could I hang the 4 radiators (one for each GPU) for their closed-loop water cooling systems, along with the radiator(s) for my CPUs' closed-loop water cooling system(s)? That's another reason for my selecting the Lian Li PC-D8000 cases for this task. In each Lian Li PC-D8000 system, besides the room for up to 20 storage drives, there's room to install about 16 single-radiator closed-loop water cooling units. There's space for at least three such coolers on each side of the case; four can be installed at the top of the case; at least five can be installed on the rear of the case, since I'm using only one PSU that would occupy a space at the rear of the chassis (additionally, my FSP BoosterX5 is mounted in a drive slot at the front of the case); and there's even a way to install a radiator at the floor of the case if one isn't using two traditional PSUs. Needless to say, the attachment-point options for radiators are aplenty (and so are the number of options for simple fan placement), but closed-loop system cable lengths can impose their own limitations.

With the selection of the Lian Li systems for closed-loop water cooling of hot components, I can use one of my CORSAIR Hydro Series H100i Extreme Performance Water/Liquid CPU Coolers [ http://www.newegg.com/Product/Product.aspx?Item=N82E16835181032 ] to cool the hotter of the two CPUs in each system (i.e., CPU1); one of my CORSAIR Hydro Series H80i High Performance Water/Liquid CPU Coolers [ http://www.newegg.com/Product/Product.aspx?Item=N82E16835181031 ] to cool the cooler CPU2; and my Arctic Acceleros (and alternative GPU coolers to the extent needed - see my preceding related posts, above) to cool my GPUs.

I prefer the closed-loop route to the traditional water cooling route because I believe it gives me more options to experiment with to find the coolest solution for my needs, and it's simpler to implement, troubleshoot and maintain. One issue that I'm confronting and considering is the optimal radiator placement and radiator-fan orientation to keep all water-cooled components, as well as all other case internals, at the coolest temperatures achievable. I'm leaning toward lowest-to-the-ground placement for all radiators, with the fans oriented to draw cooler room air in. But it's not without downside considerations - by how much will the internal temperatures rise? So I'll need to exhaust that warmer air out the top of the case as rapidly as possible. There are fans directly over the RAM chips and exhaust fans at the top of the case, so bringing in cooler, low-to-the-floor outside air should cool the CPUs and GPUs more (than sending warmer inside air over the radiators would). Since the CPUs and GPUs are the greatest generators of internal heat, the internal temperature shouldn't rise that much. But the reverse fan placement may work better to get hot air out of the case in as many ways as possible, even if that means blowing it across the radiators that cool the GPUs and CPUs. Another contributing variable is the hugeness of the case itself and the large number of places where fans can be placed to bring in cooler, low-level room air and to exhaust the case's higher-level, warmer air. In the end, I may just experiment with both approaches, and maybe a mix of them.

Since this is, at best, a weekend-only upgrade, it'll be spread out over time. I need to keep these systems up and running for assignments. I will, however, post the outcome.

P.S. - I will even try rotating the Lian Li case 90 degrees so that the rear of the case will be as if it were the top; thus air from the fans on the GPUs (as well as the air from the PSU) will go straight up and out towards the 12-foot ceiling, additionally simulating the thermal benefits of a SilverStone Raven.
 
Last edited:

Tutor

macrumors 65816
Original poster
The EVGA GTX 980C May Very Likely Support Heavy Duty Rendering

Within a few weeks, EVGA should be releasing the GTX 980 Classified with a 1291 MHz base clock and a 1405 MHz boost clock. This GPU is expected to have two 8-pin power connectors. [ http://videocardz.com/52857/evga-ge...-first-card-to-ship-with-1400-mhz-boost-clock ] Thus, it should have the high TDP needed to avoid throttling under heavy compute tasks, particularly if one underclocks it slightly [aka robbing Peter (extremely high clocking) to pay Paul (high TDP headroom)]. If I leap at this one, then I'd likely also water cool it with a $70 Corsair Hydro H55 CPU cooler [ http://www.corsair.com/en-us/hydro-series-h55-quiet-cpu-cooler ] (it has the least height of the Corsair Hydros) plus a $30 Kraken G10 CPU-to-GPU converter [ http://www.nzxt.com/product/detail/138-kraken-g10-gpu-bracket.html ]. The Kraken-Corsair pairing might also help those who have already leaped (if their intent is to do heavy-duty rendering). Those Arctic Accelero V1 coolers [see post 1084, above, plus Arctic Silver 5 paste] helped me lower temps by over 40 degrees. On the few of my non-H2O-cooled GPUs where I've already replaced the factory GPU thermal paste with Arctic Silver 5, it has lowered their temps by at least 10 degrees. That's the same thermal paste that I use for all of my CPUs.
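
To see why a slight underclock buys TDP headroom, note the usual rule of thumb that dynamic power scales linearly with clock and quadratically with voltage (P ~ f x V^2). A minimal Python sketch; the clocks are the Classified's published specs, but the voltage step is purely my illustrative assumption:

    def relative_power(f, v, f0, v0):
        return (f / f0) * (v / v0) ** 2        # P ~ f * V^2 rule of thumb

    boost_mhz, underclocked_mhz = 1405, 1291   # drop from boost to base clock
    v0, v = 1.21, 1.16                         # hypothetical voltage reduction
    print(round(relative_power(underclocked_mhz, v, boost_mhz, v0), 2))
    # 0.84 - an ~8% clock sacrifice yields ~16% more thermal/power headroom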
 
Last edited:

Tutor

macrumors 65816
Original poster
MacPro7,1 had better be soon and spectacularly priced and configured.

The prices and specs in the pics below make me smell a MacPro7,1 around the corner. Otherwise, with those options at that price, the 2014 iMac is the 2014 replacement for, at least, the base 2013 MacPro. The new iMac has a faster CPU and a built-in 5K display fed by an AMD Radeon R9 M295X 4GB GDDR5, all for a substantially lower price than a closely configured 2013 MacPro. Recently, the highest-spec'd iMacs almost uniformly had Nvidia GPUs, but not this time around. Thus, if there is a MacPro7,1, it most likely will not have internal Nvidia GPUs.

After further review of this new iMac, I have to say that this release, at the advertised price points for it and its options, is enough to make Mr. Cheapo consider leaping onto the bandwagon, especially as a video production system, if it lives up to the hype.
 

Attachments

  • 27inImacDeckedOutFor4399.png · 482.2 KB · Views: 118
  • Base2013MPCloselyAsPossibleConfigured7894.png · 447.5 KB · Views: 133
Last edited:

firsmith

macrumors member
Oct 16, 2014
37
0
So you see no issues with the M295X powering this? (I just ordered one, but this stuff's over my head.)
 

NOTNlCE

macrumors 65816
Oct 11, 2013
1,087
476
Baltimore, MD
How about Xserve?

So, yesterday, I was given an Xserve 1,1 with a full storage array for free. I plan on using it as a file server for the many Macs in my home. However, I own a Mac Pro 1,1 and used to own a 3,1, and I know how hot they run. My question is: will the Xserve 1,1 accept the 5300-series Xeons? They're socket- and chipset-compatible, just like in a Mac Pro, but I want to be sure. Also, how much benefit would I get from using low-voltage Xeons? Will that help the thermal output?
Thanks.
-N

EDIT:

A bit of googling seems to show the 5300s are OK. Hoping the low voltage models are good.
 
Last edited:

Tutor

macrumors 65816
Original poster
So, yesterday, I was given an Xserve 1,1 with a full storage array for free. I plan on using it as a file server for the many Macs in my home. However, I own a Mac Pro 1,1 and used to own a 3,1, and I know how hot they run. My question is: will the Xserve 1,1 accept the 5300-series Xeons? They're socket- and chipset-compatible, just like in a Mac Pro, but I want to be sure. Also, how much benefit would I get from using low-voltage Xeons? Will that help the thermal output?
Thanks.
-N

EDIT:

A bit of googling seems to show the 5300s are OK. Hoping the low voltage models are good.

I'm not aware of anyone who has done that upgrade, and Everymac doesn't have an answer [ http://www.everymac.com/systems/apple/xserve/specs/xserve-intel-xeon-2.0-quad-specs.html ]. My only concerns would be EFI support and thermals, given the low profile of the Xserve. If you find someone who has done the Xserve 1,1 - 5300 upgrade, get further information from them. SMC fan control should help with the thermals. Low-voltage CPU models will also help, but by how much I'm unable to quantify.
 

Tutor

macrumors 65816
Original poster
8G VRAM GPUs from AMD and Nvidia soon to drop

Within the next couple of months, both AMD and Nvidia will be releasing 8GB VRAM versions of their latest top-of-the-line gaming cards [ http://videocardz.com/53912/msi-radeon-r9-290x-gaming-8gb-pictured and http://www.kitguru.net/components/g...0-with-8gb-of-memory-in-november-of-december/ ]. From AMD it'll be, at least, the 8GB 290X, and from Nvidia it'll be, at least, the 8GB GTX 980. For those doing CUDA renders, EVGA's higher-powered/TDP [Classified] offerings may be preferred to avoid throttling and cores dropping out. In a cMP, that means an additional power source will likely be necessary to get the full benefit from the new EVGA 8GB GTX 980 Classified or the current EVGA GTX 980 Classified.
 
Last edited:

Tutor

macrumors 65816
Original poster
Is it one of the world's best deals or a brilliant attempt to steal?

Given the problems with Nvidia's latest Maxwell GPU offerings when it comes to Octane rendering, I embarked on a journey to find some high-end Keplers to further enhance my rendering capacity. This is what I found: http://www.amazon.com/EVGA-GeForce-...SN4C/ref=aag_m_pw_dp?ie=UTF8&m=A19VSG28HIWUQ8 . Since Amazon has a good buyer-protection program, I've already ordered some at that unbelievable price, so I'll soon let you know the answer to my title question.

P.S. 1 - My intent is to liquid cool them.
2 - "KE" in my sig stands for Kepler Core Equivalent, where 1 Fermi core = 3 Kepler cores. Of course, I give Kepler cores a 1-to-1 equivalency.
 
Last edited:

Tutor

macrumors 65816
Original poster
I can never have too many PCIe slots.

Thanks to EVGA/Newegg, here's Stage 1 of AlphaCanisLupis1's makeover. I prefer lots of PCIe slots for my self-builds, especially as provided by the Tyan barebones server series. With two GTX Titan Z 6Gs (water coolers installed) already in place, I still have room for two PCIe storage cards, a GT 640 4G for interactivity during scene design and rendering, and a SATA (4-port) card for RAID 0 external storage, further leaving me room for 4 more GTX Titan Zs (water-cooled). Three additional Titan Zs should be arriving this week, with the remainder arriving within the next week or two, along with my other parts for building out my water-cooled systems. Twelve of those GK110Bs [2 per card] provide 48,732 GFLOPS of single-precision floating point peak performance and 16,242 GFLOPS of double-precision floating point peak performance (via 34,560 Kepler CUDA cores). I also got 2-way SLI bridges for the Titan Zs for working with 4K(+) video, so that I can set up a bridge between each pair of Titan Zs and fully drive multiple Samsung UltraHD displays. The Titan Z acquisitions allow me to pass my eight oTitan Superclocked Edition 6Gs on to a couple of my Gigabyte/SilverStone systems with 4 double-wide PCIe slots, and on and on. Doing this upgrade on my other Tyan will allow me to pass my GTX 780 6Gs down to another couple of those 4-double-wide-PCIe Gigabyte/SilverStone systems, and on and on.
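
Those peak figures follow from the standard formula (cores x 2 FLOPs per core per clock for fused multiply-add x clock), with GK110B double precision running at 1/3 the single-precision rate. A minimal Python sketch, assuming the Titan Z's ~705 MHz base clock:

    cards = 6
    cores = cards * 2 * 2880            # 2 GK110B GPUs per Titan Z = 34,560 cores
    clock_ghz = 0.705                   # Titan Z base clock (my assumption)
    sp_gflops = cores * 2 * clock_ghz   # FMA counts as 2 FLOPs per core per clock
    dp_gflops = sp_gflops / 3           # GK110 DP rate is 1/3 of SP
    print(round(sp_gflops), round(dp_gflops))   # ~48730 and ~16243 GFLOPS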

P.S. I'm also considering putting two water-cooled GPUs in each of my two 32-core systems, as well as water-cooling their 4x E5-4650s. Moreover, while I'm water-cooling GPUs in the Tyans, I will, more likely than not, water-cool their 5680s as well. I believe that water cooling the CPUs in all of these systems may allow me to better tweak their clocking through the back door, since I cannot set their clock speeds directly in the BIOS. In the BIOS, I can, however, set C-state parameters and durational and wattage limits for energy performance bias; running the CPUs a lot cooler may just do the trick if I significantly raise the wattage limit and time value for the short-duration power limit, and conversely lower the wattage limit and time value for the long-duration power limit.
 

Attachments

  • A.jpg · 877.3 KB · Views: 229
Last edited:

NOTNlCE

macrumors 65816
Oct 11, 2013
1,087
476
Baltimore, MD
Thanks, Tutor, for all your hard work.

I've actually read through the entirety of this thread over the past few days (a couple of overtime shifts at work), and with my extra funds (ha, $30), I've acquired two Xeon L5335s and popped them into my Xserve. The Xserve took them fine, from what I can tell. The thermals are about what they were with the dual 2.0GHz dual-cores, but the performance is significantly better, and the noise is about the same.
The brick actually runs Yosemite very well with an aftermarket video card in its spare PCIe slot (an NVIDIA GeForce 405 512MB half-height, low-profile video card): http://www.amazon.com/NVIDIA-GeForc...p/B008TUFBYI/ref=cm_cr_pr_product_top?ie=UTF8 [The other slot is occupied by a fiber card I got for a steal on eBay.]
(Thanks again to Piker Alpha for his continued work on the boot.efi hack that allows these old machines to persevere.)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Next project is to add an electrical outlet to the closet under my basement stairs and get a nice 6U rack for the Xserve and the Xserve RAID, and hope that calms the wind turbine that is currently residing next to my bed. I'm sure this is more than what I need for the uses it'll be getting, but this forum has brought new life to this old box, and for that, I am grateful.
-N
 