
Tutor

macrumors 65816
Original poster
Judging by the video release, pretty sure it's a triple-wide card. So, two at the max with some of your HDD bays removed. Even if you could fit three, I'm not sure how you'd power them all without a second aux PSU hack.

Personally, I'm not that enamored with the Titan Z. You get 2x the Titan Black performance for 3x the price. For the same money, you could actually fit 3x Titan Blacks in your oMP and have 2,880 more CUDA cores working, or 3x 780 6GB for half the price of a Titan Z and have 1,152 more CUDA cores. Seems like a better use of Octane rendering funds, no?

Thanks Riggles. Very perceptive of you; I missed the video. You deserve an up, and I gave you one. A three-slot occupant does a lot to dry my salivation. At best, I could install only four of them plus one Titan Black Edition (25,920 CUDA cores total) into my Tyan server, which otherwise holds 8 double-wide GPUs, for about $13K; or I could install 8 Titan Black Edition SCs for about $8.08K (23,040 CUDA cores), saving about $4.9K but leaving me with 2,880 fewer CUDA cores. 23,040 happens to be the same total CUDA core count that I have at present in my 8 GTX 780 Ti ACXs, which cost about $250 less per card than the Titan Black Edition. The Titan Black Edition does have twice the VRAM (6 GB vs. 3 GB), however.
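As a sanity check on the arithmetic above, here's a quick sketch. The per-card prices are the rough figures quoted in this post ($3K per Titan Z, ~$1.01K per Titan Black, ~$250 less for a 780 Ti), so treat them as assumptions, not current street prices:

```python
# Rough cost-per-CUDA-core comparison of the options discussed above.
# Prices are the approximate figures quoted in this post, not street prices.
configs = {
    "4x Titan Z + 1x Titan Black": (4 * 5760 + 2880, 4 * 3000 + 1010),
    "8x Titan Black SC":           (8 * 2880,        8 * 1010),
    "8x GTX 780 Ti ACX":           (8 * 2880,        8 * 760),
}

for name, (cores, price) in configs.items():
    print(f"{name}: {cores:,} CUDA cores for ${price:,} (${price / cores:.3f}/core)")
```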

Note that in the video Nvidia uses the price of 4 Titan Zs (4 x $3K = $12K), whereas they appear to be comparing only three of them to match the processing power of the Google Brain system aggregation.

Late summer, not a bad timeline at all. I'm curious to see how performance scales and whether it's worth the $$$ investment.

Leon, you'd better thank Riggles, because he's just saved you $60K. ;) You deserve an up for your rapid notice of the new GPU, and I gave you one.
 
Last edited:

echoout

macrumors 6502a
Aug 15, 2007
600
16
Austin, Texas
Great use of a tool. Like you, I also use GPU rendering for motion graphics elements for video applications. GPU rendering is not only sufficient, but is perfect for the vast majority of jobs that I see day to day.

Man, and with Octane coming to After Effects, I've never felt better about my render rigs. Stoked!
 

DJenkins

macrumors 6502
Apr 22, 2012
274
9
Sydney, Australia
Man, and with Octane coming to After Effects, I've never felt better about my render rigs. Stoked!

You better not be joking... I need to know about this!!! :eek:

----------

..it was right under my nose on the Otoy news page:

OTOY also announced application support for Adobe After Effects,
demonstrating the ability to import a complete 3D scene into the After Effects timeline which can
then be edited in real-time.

So it's integrated in a similar way that Cinema 4D Lite works with Adobe CC. Pretty cool, but I got all excited like I could use GPU for all AE renders haha :eek:
 

Tutor

macrumors 65816
Original poster
You better not be joking... I need to know about this!!! :eek:

----------

..it was right under my nose on the Otoy news page:



So it's integrated in a similar way that Cinema 4D Lite works with Adobe CC. Pretty cool, but I got all excited like I could use GPU for all AE renders haha :eek:

But how will one (after importing a complete 3D scene into the After Effects) edit it in real-time in After Effects? Will the 3D raytracing engine based on NVIDIA OptiX be put to use? [ http://www.nvidia.com/content/tesla/pdf/gpu-accelerated-applications-for-hpc.pdf ]

And,

I. The recently released version 1.5 of Octane includes, among other things:

1) Alembic file and Lua scripting support, allowing users of just the standalone version (i.e., those without the application-specific plugins) to have animation import, review, tweaking and real-time rendering right in the OctaneRender Viewer [ http://raytracey.blogspot.co.nz ];

2) Group visibility for fade-in and -out effects;

3) A new time slider for viewing animated geometry and camera data with camera motion blur;

4) A new node system for scene management and navigation [Step 2 in becoming more procedural, a little like Houdini]; and

5) The .ORBX 3D interchange file format, which allows all aspects of a 3D scene - geometry, materials, properties, textures, lighting, transform hierarchies and cameras - to be stored in a self-contained file format (thanks, Autodesk and Mozilla), allowing an artist or animator to import and export very complex 3D scenes across 15 modeling programs so that they appear with the same final render quality as in the 3D program originally used to create them.

[ http://render.otoy.com/newsblog/ ] AND

II. OctaneRender 2.0 will also include, among others:

1) Displacement mapping that allows the height of points on a surface to be adjusted based on an image's value.

2) Object motion blur that can be applied independently of the camera movement.

3) Hair and fur (a) with reduced memory usage by up to 20 times compared to previous methods and (b) by simulating the distribution and fluidity of movement.

4) OpenSubDiv surfaces for fast refinement of subdivision surfaces (Thanks go out to Pixar).

5) Rounding edges of geometric objects without modifying and reloading them.

6) Random color texture for instances that would otherwise be the same color.

7) Better sky rendering by using textures as sky backgrounds and by ensuring that objects in the scene accurately reflect the loaded sky texture.

8) Further node enhancements - Node folder structure that allows users to create packages of nodes, node graphs, and node trees, and organize them in folders along with thumbnails for easy identification (becoming somewhat more procedural Houdini - Thank you).

9) Region rendering to save time on test renders.

10) Network rendering, which will then be officially supported (finally - and thank you, Otoy, for it's about time).

[ http://www.wset.com/story/25072358/...further-advancing-the-science-of-cg-rendering ]

III. Add to these developments the upcoming stronger-performing GPUs with more VRAM, and one would be hard pressed not to admit that things are just getting better and better for visual artists using OctaneRender.
 
Last edited:

sirio76

macrumors 6502a
Mar 28, 2013
571
405
OctaneRender 2.0 will also include, among others:

1) Displacement mapping that allows the height of points on a surface to be adjusted based on an image's value.

2) Object motion blur that can be applied independently of the camera movement.

3) Hair and fur (a) with reduced memory usage by up to 20 times compared to previous methods and (b) by simulating the distribution and fluidity of movement.

5) Rounding edges of geometric objects without modifying and reloading them.

6) Random color texture for instances that would otherwise be the same color.

7) Better sky rendering by using textures as sky backgrounds and by ensuring that objects in the scene accurately reflect the loaded sky texture.

9) Region rendering to save time on test renders.

10) Network rendering, which will then be officially supported (finally - and thank you, Otoy, for it's about time).

VrayRT can do all of that.. today;)
 

fiatlux

macrumors 6502
Dec 5, 2007
351
139
A challenge for Tutor: build a rig with new FirePro W9100 cards... and 6 times as many 4K displays :eek:

Not much use for CUDA acceleration, and on paper it does not seem to match the Titan (reg/black/Z) Perf/Price ratios, but it should deliver for OpenCL.

How many could you fit in a Mac Pro?
 

Tutor

macrumors 65816
Original poster
Ups earned

VrayRT can do all of that.. today;)

Thanks sirio76. VrayRT is and has been in the house, and it will soon be covered more here, especially the RT aspects and how to maximize them.


A challenge for Tutor: build a rig with new FirePro W9100 cards... and 6 times as many 4K displays :eek:

Not much use for CUDA acceleration, and on paper it does not seem to match the Titan (reg/black/Z) Perf/Price ratios, but it should deliver for OpenCL.

How many could you fit in a Mac Pro?

Thanks fiatlux for the update. If I were to build one, it would be after OpenCL gets a lot more than lip service from the developers of the applications I use most. I suspect you could fit three W9100 cards in the oMacPro, if my analysis (from photos showing them to be only double-wide, not triple-wide, cards) is correct. Currently, in helping a dear friend complete a compute-based assignment, the experience is reinforcing in me the need to have at least one card in the main GPU compute system dedicated to interactivity - something I've used the cheap GT 640s for in the past and will continue to do. The card used for interactivity has slot needs too. So in a system like the oMacPro, which has at most enough slots for either three double-wide cards or two double-wide cards plus a single-wide card, if the system is being used both to control a compute job and to participate in it, I'd recommend forgoing three expensive double-wide GPU compute cards in favor of one cheap single-wide card and two double-wide cards, unless, of course, one goes to the added expense of an external chassis. Also, if you're mainly using them to drive high-density displays, I suspect two of those W9100s will be the sweet spot, because you're going to need room for lots of fast storage to feed those GPUs. So the topmost slot will likely be home to a fast storage card.
 
Last edited:

Tutor

macrumors 65816
Original poster
Squeezing more life from oMacPros - Choices - Food for thought.

Are there any oMacPro (like the 2007s, for instance) owners with large and fast storage needs, large GPU compute needs, and the need for an x1 slot for a low-demand single-wide card? If there are, wouldn't a good compromise be to use two GPU compute cards (one in PCIe slot 1 [x8] and the other in PCIe slot 4 [x8]), with storage in Bay 1 and the remainder in the lower portion of the optical drive bay? An FPS Booster (to help power the GPUs) could go above them (see my pics and earlier posts, above, about how to get three double-wide cards in an oMacPro). That would leave you with two empty PCIe slots - slot 2 [x8] for a fast storage card and slot 3 [x1] for a minimal-needs card.
 
Last edited:

riggles

macrumors 6502
Dec 2, 2013
301
14
Are there any oMacPro (like the 2007s, for instance) owners with large and fast storage needs, large GPU compute needs, and the need for an x1 slot for a low-demand single-wide card? If there are, wouldn't a good compromise be to use two GPU compute cards (one in PCIe slot 1 [x8] and the other in PCIe slot 4 [x8]), with storage in Bay 1 and the remainder in the lower portion of the optical drive bay? An FPS Booster (to help power the GPUs) could go above them (see my pics and earlier posts, above, about how to get three double-wide cards in an oMacPro). That would leave you with two empty PCIe slots - slot 2 [x8] for a fast storage card and slot 3 [x1] for a minimal-needs card.

That's the kind of thing I was thinking of. The only hiccup is finding a powerful enough card that fits in a single slot. If I'm doing the kind of work where I want two dedicated compute cards (which typically means some sort of 3D or DCC work), I would likely be using a large display or two and need a good amount of VRAM for fluid interactivity with scene files. The only cards that really fit that description (and fit in one slot) are workstation cards like the Quadros. Those cards, on principle, bug me, but they do fill a need. So something like a K4000, which has 3GB of VRAM and can drive high-poly scenes across multiple displays well, is like $750+.
 

fiatlux

macrumors 6502
Dec 5, 2007
351
139
Thanks Tutor for the quick reply. My question was rhetorical, as I mostly use my MacPro for photo editing and storage, and I currently live with a GT120, which is perfectly OK if you don't game and don't do 3D or any other GPU computation. But being a techy guy at heart, I like researching options and would not mind benefiting from GPU acceleration as it makes its way into mainstream apps.
 

Tutor

macrumors 65816
Original poster
That's the kind of thing I was thinking of. The only hiccup is finding a powerful enough card that fits in a single slot. If I'm doing the kind of work where I want two dedicated compute cards (which typically means some sort of 3D or DCC work), I would likely be using a large display or two and need a good amount of VRAM for fluid interactivity with scene files. The only cards that really fit that description (and fit in one slot) are workstation cards like the Quadros. Those cards, on principle, bug me, but they do fill a need. So something like a K4000, which has 3GB of VRAM and can drive high-poly scenes across multiple displays well, is like $750+.

The GPU that I use for interactivity is the $80-$110 EVGA GT 640/4 GB, and to top it off it has a little CUDA power too - but only 384 Kepler cores, which approximate a little over 100 Fermi cores, or about 1/10th of an old GTX Titan. So when interactivity is no longer needed, you can add it to the mix. It's a respectable video card, given its VRAM amount, but it's not for gaming.
 

Tutor

macrumors 65816
Original poster
Innovation

Wave of the Future?

Macs (even the nMP) are invited to take advantage of CUDA rendering prowess via OpenCL 1.2 GPU runtime for NVIDIA GRID. Someday soon you could be doing your 3d design and rendering from a nMP running an Autodesk product through OSX via the internet.

In an innovative collaboration, Nvidia (GPU Grid rendering), Otoy (maker of OctaneRender), Autodesk (3D software maker) and Amazon (cloud host) have teamed up to bring about 3D design and rendering as a service.

Check out these URLs and particularly the enclosed videos:

http://autodesk.blogs.com/between_t...esk-applications-in-my-web-browser-today.html

https://area.autodesk.com/blogs/cory/running-autodesk-software-on-amazon-with-otoy

https://area.autodesk.com/blogs/cory/running-autodesk-software-in-a-web-browser-with-amazon-and-otoy

http://architosh.com/2014/03/autode...cloud-workstation-autodesk-edition-at-amazon/

http://architosh.com/2014/03/autode...oud-workstation-autodesk-edition-at-amazon/2/
 

riggles

macrumors 6502
Dec 2, 2013
301
14
The future is now

Wave of the Future?

Macs (even the nMP) are invited to take advantage of CUDA rendering prowess via OpenCL 1.2 GPU runtime for NVIDIA GRID. Someday soon you could be doing your 3d design and rendering from a nMP running an Autodesk product through OSX via the internet.

In an innovative collaboration, Nvidia (GPU Grid rendering), Otoy (maker of OctaneRender), Autodesk (3D software maker) and Amazon (cloud host) have teamed up to bring about 3D design and rendering as a service.
Yup, you can already do CUDA rendering on a nMP through your browser.

One of the big roadblocks in my mind about cloud rendering is the bandwidth bottleneck. If one of the things holding back GPU rendering for some people is the limited VRAM ceiling (say 6GB for a Titan), how's that gonna work when the PCIe 3.0 x16 pipeline is replaced with standard broadband upload speeds? Unless you're streaming everything (Maya/Max, scene assets, texture library, everything) from EC2, the connection bottleneck is going to be a thing.

It's also still a pain to get an EC2 instance up and running. I've done it for a few jobs, but it's not at all "artist friendly". So you either wait for it to launch and set it up every day, or you pay a small ransom to keep it going 24/7.

So, there's still some things to work out.
 

iMacmatician

macrumors 601
Jul 20, 2008
4,249
55
Thanks Riggles. Very perceptive of you. I missed the video. You deserve and I gave you an up. A three slot occupant does more to dry my salivation. At best, I could install only four of them and one Titan Black Edition (25,920 CUDA cores total) into my otherwise 8 double wide GPU slotted Tyan server for about $13k or I could install 8 Titan Black Edition SCs for about $8.08K (23,040 CUDA cores), saving me about $4.9K, but leaving me with 2,880 fewer CUDA cores. 23,040 CUDA cores happens to be the same number of total CUDA cores that I have at present in my 8 GTX 780 Ti ACXs that cost about $250 less per GPU card than the Titan Black Edition. The Titan Black Edition does have twice the amount of vram (6 gigs vs. 3 gigs), however.
You get more cores with the TITAN Z + TITAN Black solution but the TITAN Z's cores are clocked lower than the TITAN Black's.

TITAN Black: 2880 CCs, 889 MHz, 5.12 TFLOPS,
TITAN Z: 5760 CCs, around 700 MHz, 8 TFLOPS.

The clock speed is neither officially given nor finalized (at least when it was announced), but estimated from the 8 TFLOPS number. So you would get

8x TITAN Black = 40.97 TFLOPS,
4x TITAN Z + 1x TITAN Black = around 37 TFLOPS.

You do get more total memory and bandwidth though with the Z option, since the TITAN Z doesn't sacrifice in those aspects compared to the TITAN Black.
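A minimal sketch of where those TFLOPS numbers come from (peak SP FLOPS = cores x 2 FLOPs per clock for FMA x clock; the ~700 MHz TITAN Z clock is, as noted, an estimate):

```python
def sp_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Peak single-precision TFLOPS: cores * 2 FLOPs/clock (FMA) * clock."""
    return cuda_cores * 2 * clock_ghz / 1000.0

titan_black = sp_tflops(2880, 0.889)  # ~5.12 TFLOPS
titan_z = sp_tflops(5760, 0.700)      # ~8.06 TFLOPS (clock estimated)

print(f"8x TITAN Black:              {8 * titan_black:.2f} TFLOPS")
print(f"4x TITAN Z + 1x TITAN Black: {4 * titan_z + titan_black:.2f} TFLOPS")
```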
 

Tutor

macrumors 65816
Original poster
You get more cores with the TITAN Z + TITAN Black solution but the TITAN Z's cores are clocked lower than the TITAN Black's.

TITAN Black: 2880 CCs, 889 MHz, 5.12 TFLOPS,
TITAN Z: 5760 CCs, around 700 MHz, 8 TFLOPS.

The clock speed is neither officially given nor finalized (at least when it was announced), but estimated from the 8 TFLOPS number. So you would get

8x TITAN Black = 40.97 TFLOPS,
4x TITAN Z + 1x TITAN Black = around 37 TFLOPS.

You do get more total memory and bandwidth though with the Z option, since the TITAN Z doesn't sacrifice in those aspects compared to the TITAN Black.

Thanks for your insights. In between jobs I'm reconfiguring my systems to add more 3D rendering capacity. My render farm is about to have 21 clock-tweaked GPU/CPU rendering systems, with room for 9 more double-wide GPUs, for a total equivalency potential of >70 old RD Titans. When all of these additions are acquired, I'll have a SP TFLOPS potential of >315, including my current oRD Titan OctaneRender equivalency of about 58, with 57,984 clock-tweaked Kepler and 28,448 clock-tweaked Fermi cores (28,448 Fermi cores ≈ 85,344 Kepler cores, so in Kepler terms I have a total core count equivalent to about 143,328 Kepler cores [(28,448 * 3) + 57,984 = 143,328]). Currently there are 96 Nehalem/Westmere clock-tweaked CPU cores and 110 clock-tweaked Sandy/Ivy Bridge CPU cores, for a total CPU core count of 206. The CUDA SP TFLOPS potential is >251 for the GPUs; as for my ATI Stream Processing Units, the OpenCL SP TFLOPS potential is >28.136. I'm looking at adding a trio of GTXs by this year's end. More than likely they will not be of the Z variety, but I'm not even sure they'll have the word "Titan" in their name, should the 6 GB VRAM version of the 780 Ti SC ACX or C ACX be released.
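For what it's worth, the Kepler-equivalency arithmetic above checks out (the 3:1 Kepler-to-Fermi ratio is this post's own rule of thumb for rendering throughput, not an official spec):

```python
# Core-count bookkeeping from the post above: ~3 Kepler cores per Fermi core.
kepler_cores = 57_984
fermi_cores = 28_448
KEPLER_PER_FERMI = 3  # rule-of-thumb rendering equivalency, not an exact spec

fermi_as_kepler = fermi_cores * KEPLER_PER_FERMI
total_kepler_equiv = fermi_as_kepler + kepler_cores
print(f"Fermi cores as Kepler equivalents: {fermi_as_kepler:,}")
print(f"Total Kepler-equivalent cores:     {total_kepler_equiv:,}")
```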

P.S. For the price (under $250), performance, tweakability, multi-OS ability, ease of use and stability (so long as one immediately upgrades the BIOS to version F5 - I just downloaded F7, so I have no opinion on it yet), I'm having a wonderful time so far with my GA-X79-UP4 (rev. 1.1) motherboards. They each allow me to populate them with 4 GPUs, giving me the compute ability I need for minimal cost. However, there were complaints that BIOS version F4 was not ready for prime time.
 
Last edited:

Tutor

macrumors 65816
Original poster
That's the kind of thing I was thinking of. The only hiccup is finding a powerful enough card that fits in a single slot. If I'm doing the kind of work where I want two dedicated compute cards (which typically means some sort of 3D or DCC work), I would likely be using a large display or two and need a good amount of VRAM for fluid interactivity with scene files. The only cards that really fit that description (and fit in one slot) are workstation cards like the Quadros. Those cards, on principle, bug me, but they do fill a need. So something like a K4000, which has 3GB of VRAM and can drive high-poly scenes across multiple displays well, is like $750+.

riggles,

I forgot to mention that Compeve usually has very competitive prices for powerful single (and double) slot GPUs cards [ http://members.ebay.com/ws/eBayISAPI.dll?ViewUserPage&userid=compeve_corp ]. Don't hesitate to make them an offer on something that you want to purchase for less than their original asking price.
 

Tutor

macrumors 65816
Original poster
Where might the Titan-Z and its $3k price tag fit?

Many of the creators of GPU-based rendering solutions recommend Nvidia's GTX CUDA-based lineup over the Tesla cards, so I would not relegate the Titan-Z to "entry level compute." Fortifying that observation is the fact that there is a whole host of less capable GTX CUDA cards that can be had for less money and that satisfy many content creators' needs. Without question, the Titan-Z has more compute power than any other single GTX card and any announced or released Tesla card. But the questions are where it fits, for whom it fits, and who will pay the $3K passage fee. It doesn't fit me for what I do right now, but things can and, I suspect, will change. How many gamers and content creators are going to spend $3K on a GPU occupying 3 PCIe slots when other lower-priced, capable GTX solutions exist? Below, I've posited one candidate. Are there others? Might the card also be touted by Nvidia as a top-end gamer's card, because that's perceived by Nvidia as a large market? Where's 5K gaming?

It looks to me like Nvidia might have dug itself into a hole by pricing the Titan-Z at $3K. With earlier generations of duo-GPU cards, 99 dollars and/or 99 cents was used to stay under certain price points. E.g., the GTX 690 was $999.99, but that card cost a lot more than the GTX 590's $699 price [ http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units ], and the GTX 590 cost a lot more than the GTX 295's $499 price [ http://www.tomshardware.com/reviews/geforce-gtx-295,2107-11.html ]. There was no duo GTX 400, and the 300 line was not much to speak of and didn't include a duo-GPU card either. My point is that, historically, duo-GPU Nvidia cards have cost more each generation. But doesn't this fly in the face of the falling prices we see for other electronics over time? And although supplying a duo-GPU card (with apparent added performance) certainly ought to justify some increase in price over the single-GPU version, does there come a point at which the added price borders on the ridiculous, or appeals to only a very, very small user population, such that all it really does is make the statement, "See what we can do for you if you can afford it"?

There is at least one sort of individual workstation where a Titan Z makes perfect sense: a PCIe-slot-limited system like one using the Supermicro ("SM") Dax line of motherboards - http://www.supermicro.com/products/motherboard/Xeon1333/#top . It's one of the few motherboard series made by SM with any overclocking ability, and its overclocking is incredibly stable. The Sandy Bridge CPU family (including the Ivy Bridges) can be overclocked by a factor of at most about 1.0755 (roughly 7.55%). The folks I know with Dax motherboards have, almost without exception, been able to get the highest overclock possible using the Dax line. The Dax boards sell from the low $500s to the high $800s, depending on the special added features, but, so far, there's really only one full double-wide x16 PCIe slot on any of them - unless you put the GPU in a case that has extra slots at the bottom, fully below the motherboard, so a GPU in the lowest/2nd x16 slot can breathe. The two x16 slots sit side by side with no real slot space between them, at the end of the board (see "quick view" pics in the URL). For someone who wants the fastest two-CPU Sandy/Ivy Bridge system (say, dual Intel Xeon E5-2697 v2 [ http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon E5-2697 v2.html ] overclocked to run 24 cores at 2.90 GHz base (vs. 2.7 GHz) and one core from each CPU at 3.76 GHz at max turbo (vs. 3.5)*/), with a powerful 5,760 CUDA core GPU, the Dax + Titan Z combo would be one of the faster two-CPU/GPU self-build combinations on the market. Superbiiz sells the Dax barebones system [ Supermicro SuperWorkstation SYS-7047AX-72RF Dual LGA2011 1280W 4U Rackmount/Tower Server Barebone System (Black) ] for $1,758 [ http://www.superbiiz.com/detail.php?name=SY-74772RF ]. Because it'll hold up to 512 GB of RAM, setting up a huge RAM disk (e.g., 384 GB) that would be many times faster than a PCIe-based solution shouldn't be a problem.

DAX CPU
A properly cooled system would likely show, via CPU-z or the like, that an overclocked Dax system under almost any light load will be running 24 cores at 3.23 GHz. Moreover, I project that such a system would achieve a Geekbench 3 score in excess of 57,000.

Titan-Z GPU
A Titan-Z will likely be about 2x faster than the GTX 780 Ti. A GTX 780 Ti (which renders the Octane benchmark scene in 72 sec.) is about 1.32x faster than the original reference design Titan (with the GK110A), which renders the same scene in 95 sec. Therefore, a single GTX Titan-Z should be about 2.64 times faster than the original Titan in CUDA rendering performance and should render the same Octane benchmark scene in about 36 sec. As an aside, two GTX Titan-Zs should render that scene in 18 sec., four in 9 sec., and eight in 4.5 sec. That's the way linearity works in Octane.
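Since Octane scales almost linearly with total CUDA throughput, the render-time projections above reduce to a simple division (the 2x-over-780 Ti factor is this post's assumption, not a measured number):

```python
# Octane render time scales ~inversely with aggregate CUDA throughput.
TITAN_BENCH_SEC = 95        # original Titan on the Octane benchmark scene
TITAN_Z_SPEEDUP = 2 * 1.32  # ~2x a 780 Ti, which is ~1.32x the original Titan

for n_cards in (1, 2, 4, 8):
    t = TITAN_BENCH_SEC / (TITAN_Z_SPEEDUP * n_cards)
    print(f"{n_cards}x Titan-Z: ~{t:.1f} sec")
```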


*/ The full breadth of turbo stages should shake out like this per CPU [ http://www.tomshardware.com/reviews/ivy-bridge-ep-xeon-e5-2697-v2-benchmarks,3585-2.html ], going from highest GHz to lowest:

1) one core - 3.5 GHz standard vs. overclocked on Dax - 3.76 GHz;
2) two cores - 3.4 GHz standard vs. overclocked on Dax - 3.66 GHz;
3) three cores - 3.3 GHz standard vs. overclocked on Dax - 3.55 GHz;
4) four cores - 3.2 GHz standard vs. overclocked on Dax - 3.44 GHz;
5) five cores - 3.1 GHz standard vs. overclocked on Dax - 3.33 GHz and
6) six to twelve cores - 3.0 GHz standard vs. overclocked on Dax - 3.23 GHz .

So in a dual E5-2697 CPU Dax-based system one could likely get turbo stages like this:
1) two cores - 3.76 GHz;
2) four cores - 3.66 GHz;
3) six cores - 3.55 GHz;
4) eight cores - 3.44 GHz;
5) ten cores - 3.33 GHz and
6) twelve to twenty-four cores at 3.23 GHz.
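The per-stage numbers above are just the stock turbo bins scaled by the ~1.0755 overclock factor mentioned earlier; a quick sketch:

```python
# Stock E5-2697 v2 turbo bins (GHz by active-core count), scaled by the
# ~1.0755x overclock factor the Dax boards allow.
OC_FACTOR = 1.0755
stock_bins = [(1, 3.5), (2, 3.4), (3, 3.3), (4, 3.2), (5, 3.1), (6, 3.0)]

for active_cores, stock_ghz in stock_bins:
    oc_ghz = stock_ghz * OC_FACTOR
    label = "6-12" if active_cores == 6 else str(active_cores)
    print(f"{label} active core(s): {stock_ghz:.2f} GHz stock -> {oc_ghz:.2f} GHz OC")
```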

P.S. The Supermicro barebones system appears to have the necessary space below the last/bottom X16 slot on the motherboard to allow the installation of a triple wide GPU, such as the Titan-Z, in that last X16 slot.
 
Last edited:

Tutor

macrumors 65816
Original poster
SuperMicroMacStrosities

What are SuperMicroMacStrosities? They're otherwise known as WolfPackPrimes. SuperMicroMacStrosities have personality disorders because one day they might be under the control of Billy Gates, the next day Linny Torvalds, and the next day Timmy Cook. Each SuperMicroMacStrosity has 32 monstrous, wicked and shocking brains that are seen by outsiders as one indecipherable head. But actually, each SuperMicroMacStrosity has four heads (foreheads?) that currently appear to the viewer as one. Each of the four heads has 8 brains (4 [E5-4650 ES QBED] x 8 = 32). Under Windows and Linux, each of the four heads delivers Geekbench 3 performance equal to that of a seven-core nMP, if there were such a creature - its score is right in the middle of the six- and eight-core nMPs'. The Linux GB3 score for a whole system is >70,000. Each SuperMicroMacStrosity has great memory skills and even greater memory potential. For the time being, I'm keeping them in my laboratory for further mutilations and malformations. More news, including benchmarks, to come.
 

Attachments

  • AboutThisMac1.png
    AboutThisMac1.png
    45.7 KB · Views: 88
  • AboutThisMac5.png
    AboutThisMac5.png
    157.6 KB · Views: 137
  • AboutThisMac2.png
    AboutThisMac2.png
    69 KB · Views: 109
  • AboutThisMac3.png
    AboutThisMac3.png
    61.5 KB · Views: 111
Last edited:

DJenkins

macrumors 6502
Apr 22, 2012
274
9
Sydney, Australia
What are SuperMicroMacStrosities?

It's aliiiveeee!!!! :D :D

A few times I'd dropped hints to see if you'd attempted running OSX, but guessed it wasn't possible with a 4 x CPU machine.

Eagerly awaiting multi core cinebench/geekbench results for OSX.

There are still tasks not suited to a multi-GPU system - processing RED footage and rendering in After Effects still cry out for maximum CPU power in a single machine.
 

Tutor

macrumors 65816
Original poster
What a confident heart believes - hard work achieves after a prepared mind conceives.

Or it could be that the old man (b) is just lucky or (c) gets his belated 60th birthday gift from the Father or (d) benefits from a bit from all of this or (e) benefits from a bit from all of this and more.

It's aliiiveeee!!!! :D :D

Yes, it's true that both of them are alive, but they each have to undergo rigorous examination/tweaks. Also, I'm considering creating a third one. I have an extra 4 Xeon E5-4650 QBED ES CPUs (getting them all at or under $500 each was one of my best buys), and the price of the barebones SuperMicro 4-CPU deskside chassis/mobo combo has dropped below $2K [ https://www.superbiiz.com/detail.php?name=SY-847R7FP ]. Furthermore, I'd like to get the RAM allotment in each of them to 256 or 384 GB at a minimum. I may move the 128 GB currently in the second one to the first one to get 256 GB of RAM in the first one, then buy 512 GB of RAM for the second one and later another 512 GB for the third one. Then I can have a 128 GB automated RAM disk in the low-memory one (i.e., the one with only 256 GB of RAM) and 384 GB automated RAM disks for the other two. Luckily each mobo has 32 RAM slots. If the price of 32 GB sticks falls sufficiently, I would only need 16 of them initially to get to 512 GB, leaving 16 empty RAM slots for another 16x32 GB (or possibly 16x64 GB sticks - giving me more than enough RAM to build an automated 1,024 GB RAM disk instead of a 512 GB one).

A few times I'd dropped hints to see if you'd attempted running OSX, but guessed it wasn't possible with a 4 x CPU machine.

I got the hints. Because I had not done the necessary preparation for this undertaking before trying this once last year, I failed. I needed to maximize my fleshy cores, by hyperthreading them with more knowledge gained from further research. I also needed to turbo boost my fleshy cores with more common sense and reflection. Also, patience, itself, can be virtuous: waiting for 10.9.2 was a good reason to delay.

Eagerly awaiting multi core cinebench/geekbench results for OSX.

Hopefully, I'll have them tweaked for that exhibition by the end of next weekend (probably not this one, because I've got other commitments).

There are still tasks not suited to a multi-GPU system - processing RED footage and rendering in After Effects still cry out for maximum CPU power in a single machine.

Yes, and there are others, particularly doing non-GPU-assisted renders or hybrid CPU/GPU renders with TheaRender or the like. Therefore, after the 3 four-CPU systems are completed, I'll begin planning for an eight-CPU Frankenstein monster system with 2,048 GB of RAM (1,024 GB of which will be dedicated to an automated RAM disk) and 120 CPU cores, running every OS worthy of being run on it. But as you should know by now, my conservationist tendencies are rampant, particularly when it comes to $$$$. So I'll have to factor in minimizing costs and maximizing profits at each and every stage as I further maximize the performance of my render farm.
 
Last edited:

Tutor

macrumors 65816
Original poster
... .Eagerly awaiting multi core cinebench/geekbench results for OSX. ... .

Here're a couple of starters. I have yet to get high-resolution video, onboard Ethernet, or HDMI audio output working (there are no built-in audio outs on this Supermicro model). Also, I haven't yet enabled hyper-threading or turbo boosting. That's why the scores are so low [compared to my Windows and Linux scores] for this $7.4K machine.
 

Attachments

  • 1stCB15score4SuperMicroMacstrosity1.png
    1stCB15score4SuperMicroMacstrosity1.png
    997.5 KB · Views: 111
  • 1stGB3score4SuperMicroMacstrosity1.png
    1stGB3score4SuperMicroMacstrosity1.png
    81.3 KB · Views: 117
Last edited: