I was looking into buying the 2.26GHz 8-core Mac Pro, but after your comment it seems way overpriced. I'll continue waiting for Sept. and hopefully a free speed-bump upgrade will come.
 
Do any of those applications allow distributing rendering tasks to other computers on the network?

If so, what about building an overclocked hackintosh for helping with your rendering?

EDIT: I see you've already considered that. I really don't think there's much more you can do with your Mac Pro until someone creates an app that will allow us to adjust the reference clock from within OSX. Hell will buckle under the weight of ice before Apple allows EFI tweaks! :D
 
That's what I'm doing since the probability is minuscule of there being in Hades a volume of water (even with global warming's melting of the polar ice caps and perma-frost - not to mention the rarity of that liquid's attainment of solid state in such an inhospitable environment) sufficient to reach the critical mass and temperature necessary to create the buckling effect that precedes the handing down of the stone tablets containing the ten EFI commands, ushering in an era of complete Mac satisfaction. In sum, I agree with you - it ain't happening.

LOL
 
When I am ready (probably when I finish my Masters in Architecture in April 2011), my 2 2.26's will likely be placed into a rendering node like what you've done. I want to replace them with the new 3.33's :D, or who knows what will fit into that board by then, maybe a dual octo-core proc (Nehalem EX iirc) would work.....
 
I need lots of rendering capacity---lots of it. So I swapped the two seemingly too-slow 2.26 GHz Nehalem Xeons out of my 8-core 2009 Mac Pro and replaced them with two 3.2 GHz Xeons ( https://forums.macrumors.com/threads/713934/ ). I still needed more rendering nodes for pro apps such as Cinema 4D, Maya, Compressor, Shake and After Effects. Not wanting to be wasteful, I needed a home or homes for those two orphaned 2.26 GHz chips. But because I needed those chips to be more productive, overclocking them was a necessity - which required venturing to the dark side, i.e., Windows and Hackintoshes. No server mobo served my overclocking needs, so I had to find two homes where overclocking was welcomed - one for each 2.26 GHz chip. Below are the specs and costs for those houses, each of which is physically identical and appointed such that OS X can reside there too ( http://www.insanelymac.com/forum/index.php?showtopic=149505&st=80 ). The rendering performance after minimal overclocking, as shown below, was well worth the $2800 expenditure. More overclocking is on the horizon.

What are your plans for the seemingly slow Nehalems you have already removed, or plan to remove in the near future, from your Mac Pro? Are they up for adoption, or will you find a use for them?
Are you looking for further ways to improve performance above what you've managed from overclocking? Or just interested in discovering possible uses for the unused CPU's lying around?
:confused:
 
Both. One should always learn better overclocking techniques, as well as other techniques in various other areas of life and how to take them to their limits. One should also not pass up on great deals - one person's unused Nehalem CPU can be the basis of another's next cheap rendering node. Plus, I'm just curious.
:cool:

The biggest thing you can do now for system throughput, is make sure you've adequate drive throughput and memory. Graphics too, but to a lesser extent IMO.

Memory's easy, well sort of, you just need to figure out how much is really needed for your applications.

Drive throughput is the biggest bottleneck in a system these days, and can be greatly improved via some sort of RAID solution. You can go with either a software-based or a hardware-based implementation. (Fake RAID is sort of in the middle, but I don't care for it, as it can have more problems than any other methodology in my experience. Ask gugucom about his attempt to use a RR2642 to boot Windows in a stripe set.)

I and others can help if you like, but some details are needed (quite a few actually), so let me know if you're interested, and we can go from there. :)
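To make the stripe-set argument concrete, here's a rough back-of-envelope sketch of why striping helps throughput. The per-drive figure and the scaling-efficiency factor are illustrative assumptions, not benchmarks of any particular drive or controller:

```python
# Rough model of RAID 0 (stripe set) sequential throughput.
# Illustrative numbers only - real scaling depends on the
# controller, bus, and workload.

def stripe_throughput(single_drive_mb_s, n_drives, efficiency=0.9):
    """Estimate sequential throughput of an n-drive stripe set.

    efficiency < 1.0 models controller/bus overhead.
    """
    return single_drive_mb_s * n_drives * efficiency

# e.g. assuming a ~100 MB/s 7200 rpm drive of that era:
one_drive = stripe_throughput(100, 1, efficiency=1.0)   # ~100 MB/s
four_stripe = stripe_throughput(100, 4)                 # ~360 MB/s
```

The point is simply that reads/writes are spread across members, so aggregate throughput grows close to linearly with drive count, minus overhead.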
 
Question for you, OP

Is there a reason to choose the 3.2 GHz (965) over the 3.33 GHz (975) Nehalem Xeons? 'Cause looking at my sources, the latter are slightly cheaper!! :eek:
 
Note well that with single-core Cinebench render scores of 4393 CB for one of my now-overclocked 2.26 GHz Xeons and 4405 CB for the other, a single overclocked 2.26 GHz Nehalem Xeon outperforms every Mac in single-core mode on Tesselator's list ( http://tesselator.gpmod.com/Images/_Equipment_n_Tutorials/Cinebench10_Numbers.jpg ) (the highest being 4074 CB for a 2009 2.93 GHz 8-core Mac Pro), as well as the single-core performance of my modified 8-core 3.2 GHz 2009 Mac Pro (4246 CB under OS X - https://forums.macrumors.com/threads/713934/ ). Only my 4712 CB score on that 8-core 3.2 GHz Mac Pro in Boot Camp surpasses the overclocked 2.26's.

My point is that the performance increase we see in the Nehalem Macs, while significant, is a mere drop in the bucket compared to what Windows users are seeing. There's serious CPU capacity not being used by Macs and being hidden from us. Why can't Apple give us (who tender it princely sums for systems with the moniker of "Pro" in the name) a widget to safely take advantage of this hidden, but admittedly variable, potential? Today, sports teams proudly announce slogans like "We Care" and then show us a player doing something to benefit a community, because they know who butters their bread and they're trying to convince us that their existence is a good thing. Today, Apple should show us that it cares for those who butter its bread by unhiding the power inherent in the Nehalem line. Well, I must've stepped back on the soapbox while I wasn't watching my steps carefully. My bad. ;)
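As a quick sanity check on the comparison above, a small illustrative snippet computing the relative deltas between the quoted Cinebench scores (labels are my shorthand; the numbers come from the post):

```python
# Single-core Cinebench R10 scores quoted in the post above.
scores = {
    "OC 2.26 GHz Xeon (chip A)": 4393,
    "OC 2.26 GHz Xeon (chip B)": 4405,
    "2009 2.93 GHz 8-core MP":   4074,
    "3.2 GHz MP (OS X)":         4246,
    "3.2 GHz MP (Boot Camp)":    4712,
}

def pct_faster(a, b):
    """How much faster score a is than score b, in percent."""
    return round((a - b) / b * 100, 1)

# The overclocked 2.26 beats the stock 2.93 8-core by about 7.8%:
delta = pct_faster(scores["OC 2.26 GHz Xeon (chip A)"],
                   scores["2009 2.93 GHz 8-core MP"])
```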
I understand your point here, and am a fan of OC'ing systems. But traditionally, it's not a part of professional systems, workstation or server. It comes down to the effect it potentially has on reliability and stability, the potential for downtime and inaccurate results in systems used by corporate customers, and, for the vendors (i.e. Apple, Dell, HP,...), the warranty claims they'd expect to increase.

Most of it is related to the use of inadequate cooling, increased power consumption (corporations do watch their electricity bills with keen interest), and in the case of ECC memory, that functionality sacrifices a little speed for accuracy/stability.

On the whole, OC'ed professional systems are seen as a bad thing, so it's not being done. Apple, like the other system vendors, is following this train of thought on the situation.

Personally, I've seen it done, and do so myself, but caution must be employed while doing so. The CPU's are far better equipped to be OC'ed than they used to be (and designed with it in mind from the beginning). Adequate cooling is the first hurdle, as heat can kill the CPU's. The OC also applies to the memory, which means it scales with the CPU's overclock, preserving the ratios (assuming the hardware is the same, and only an OC is applied). This is particularly true with the Nehalem architecture, as the memory is tied to the BCLK as well.
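The BCLK scaling mentioned above can be sketched numerically. The multipliers here are illustrative of an E5520-class 2.26 GHz part (17x core, 8x memory at a stock 133 MHz BCLK); your chip's actual ratios may differ, so treat this as an assumption:

```python
# Sketch of Nehalem-era BCLK overclocking: both the core clock and
# the effective memory speed are derived from the base clock, so
# raising BCLK scales them together, preserving the ratio.

def clocks(bclk_mhz, core_mult=17, mem_mult=8):
    """Return (core MHz, effective memory MT/s) for a given BCLK.

    Multipliers are illustrative of a 2.26 GHz E5520-class part.
    """
    return bclk_mhz * core_mult, bclk_mhz * mem_mult

stock_core, stock_mem = clocks(133)   # ~2261 MHz core, ~1064 MT/s mem
oc_core, oc_mem = clocks(160)         # ~2720 MHz core, ~1280 MT/s mem
```

Note the memory is pushed past its rated speed along with the core, which is one reason stability testing matters.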

That leaves stability, but with proper punishment ... err... testing, it can be done. :p

As per the RAID comments, I was thinking you were looking to further improve your current system(s) for your specific purposes. ;)
 
Thanks nanofrog. Here are the details. I welcome any suggestions, as I'm always open to receiving the benefit of others' knowledge. The configuration for the two individual Windows/Hackintosh render node units I built is set forth in the image file associated with my opening post, above. My modified 8-core 3.2 GHz Nehalem Xeon system (built before 3.33 GHz chips were released) has 32 gigs of RAM, one internal 7200 rpm drive for apps, booting and Boot Camp, and, for digital media, a 7-drive array - three internal 7200 rpm drives striped with an external RAID unit containing 4 drives. My DVD slots are filled with two BluRay read/write drives - HL-DT-ST-BD-RE-GGW's.

Two of my PCI-e slots have flashed PowerColor AX4890 1GBD5-UH's overclocked to 975/core and 1100/memory. These two cards are crossfired in Boot Camp and occupy three slots. They're powered by the FSP Group Booster X5 450W Independent/Supplementary SLI Certified CrossFire Ready Active PFC Dedicated Multi.GPU Power Supply ( http://www.newegg.com/Product/Product.aspx?Item=N82E16817104054 ), which sits on top of my computer.

The fourth slot is occupied by an 8-lane/4-port SATA card connected to 4 external 7200 rpm drives striped with 3 of the internal drives.

If it's not obvious, I edit video and create 2/3D animations, as well as write scripts in JavaScript, Lingo, Authorware script, ActionScript, HTML and XML, and code in BASIC, C, C++ and Java.

I understand the tradition of not overclocking systems used for profit, but the times, the economy, the environment and, last but not least, the chips have changed. No earlier chips have given us the overclockability of the Nehalems, and I submit that it can be done safely within limits - and who better than Apple to develop a widget to do just that and set itself apart from tradition? Somewhere, I heard the saying "Think Different." I believed it then and still believe it now. Or was it just market speak?
OK, I think you can better use the drives you have, but I need some additional info.

How are the external drives set up, and how are they connected to the computer (card/interface type)?

You're in fact using a single drive for OS X, applications, and Windows via Boot Camp?

I'll need the model & ideally, specs of the card you list in slot 4 (SATA card), and the drives used (each model, and how many; looking for identical models & specs). Enterprise would be better, but I'll assume for the moment they're consumer units.

And yes, I agree the CPU's have changed enough that OC'ing is possible, and a solution for certain situations, such as yours. Faster-clocked parts are too expensive, and I've been seeing OC'ed servers more often lately, particularly with AMD parts (boards are easier to find, I think). (A lot of it has to do with finding OC server boards; a few do exist, but so far not for the Nehalem chips. They tend to come out, say, 9 months later than the initial boards.)
 
8-core Mac Computer 1: This is my eight-terabyte 2009 3.2 GHz Mac Pro system. All eight hard drives are Western Digital Caviar Black WD1001FALS 1TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drives (better described at http://www.newegg.com/Product/Product.aspx?Item=N82E16822136284 ). I am using one of the eight drives for OS X, applications, and Windows via Boot Camp. I could not install Boot Camp on the RAID. I use the RAID for storing video, animation and music files.

The card in slot four is a SANS DIGITAL HA-DAT-4ESPCIE PCI-Express x8 SATA II Controller Card (better described at http://www.newegg.com/Product/Product.aspx?Item=N82E16816111069 ). Four of the eight Western Digital Caviar Black drives are external, housed in a Sans Digital MobileSTOR MS4T silver 4-bay hot-swappable 3.5" external RAID enclosure with eSATA interface (better described at http://www.mwave.com/mwave/skusearch_v3.asp?scriteria=BA23300 ). The SANS DIGITAL MS4T is connected by four 18-inch eSATA cables to the SANS DIGITAL HA-DAT-4ESPCIE. The four internal Western Digital Caviar Black drives are in stock trays 1-4.

8-core Mac Computer 2: I have a second Mac Pro - a 2007 8-core Mac with 16 gigs of RAM (a ten terabyte system) with two SANS DIGITAL HA-DAT-4ESPCIE cards in two 8x slots. It has essentially the same disk configuration, except it has two more (six total) internal Western Digital Caviar Black WD1001FALS 1TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drives - 5 of which are part of this system's nine-drive Western Digital Caviar Black RAID - and it uses one of its two SANS DIGITAL HA-DAT-4ESPCIE cards to connect the other four drives. (The second SANS DIGITAL HA-DAT-4ESPCIE card is used to connect the external drives from computer 1 for backup.) The two extra Western Digital Caviar Black drives are housed in a special tray under the DVD drive and are connected through the 2 extra SATA connectors on the motherboard, i.e., all six internal drives are connected through the system board. This machine does backups and is a render node. By the way, the 2007 8-core Mac Pros can be overclocked a little (see http://www.engadget.com/2008/06/29/been-itching-to-overclock-your-mac-pro-no-problem/ ). Bad side effect: I just have to reboot to reset the time clock, which runs fast when overclocking. Slot one (8x) has a flashed PowerColor AX4890 1GBD5-UH overclocked to 975/core and 1100/memory. A 1x slot holds a Black Magic Intensity Pro card to capture high-def video.

8 core Mac Computers 3 and 4: Two other similarly configured 2007 8-core Mac Pro's act as level 3 and 4 backups and render nodes.

I have other systems, such as updated G5's, G4's, G3's, PowerPC 801's, 2/4-core AMD's, and dual-Xeon Dell 630's, that act as additional render nodes, plus a mixed assortment of 8086, 80286, 80386 and 80486 systems. Additional video production is done on vintage, but modified and updated, Commodore Amiga 4000's and 2000's with Video Toasters (running Lightwave), and an Atari TT030 running the original Sound/Pro Tools. Music production is done on Roland D50 and S550; Korg M1, M1R and WaveStation; Kurzweil 1000 and 2000; Yamaha DX7 and TX802; and Moog Mini and Micro; along with modern stuff running in Logic Studio Pro, and acoustics on real sax, piano, flute, guitar, harmonica and trumpet. All sound is channeled through DSP128's, Korg A3's, Uptown Flash mixers and a Roland 150 mixer, and then through Delta 1010's into a quad 2.5 GHz G5.

Oh yes! I forgot the Tandy 100 and 1000, and the Apples with the cheesy Motorola 8800 chips - all sold to me with puffery like: "This computer is so powerful with its 4.7 Mhz processor and 640k of ram and tape backup, you'll never need another one." Yeah, that last clause was right on. Luckily, I saved them all and they still work alongside my Syquest and Bernoulli drives. And don't forget the HeathKit H-8 computer kit that I purchased in 1978 to build my first one. I know - TMI. I had a senior moment that lasted about 60 seconds, i.e., it was a flashback where I entered an overlit tunnel where my whole digital life passed before me in one minute - radical. So beware. That's what asking me about computer specs does these days.
:cool:

What about backups? Are any of the WD Blacks needed for that purpose?

I'm formulating ideas here... :D
 
As indicated above, I have three 2007 8-core Mac Pros, each with about 10 terabytes of WD Black RAID used only for backing up the 2009 8-core Mac Pro. When these 2007 Mac Pros are used as render nodes, they render to the 8 terabytes of WD Blacks on the 2009 8-core Mac Pro. All of these systems, except the extreme legacy ones, are on a star-configured 64-port 1000BASE-T (gigabit) network.
It was lots of info, tightly packed, and I wasn't sure I'd understood thoroughly. :)

Here's a possibility (uses what you have):
Use the 4x internal Blacks for a stripe set (RAID0) for OS X & apps.
Set up 3x of the external Blacks in another stripe set for data. Use the last external for Windows (eliminates the need for a BC partition, and Windows wouldn't boot off a software-built array anyway).
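For reference, the capacities that layout works out to, assuming the 1 TB WD Blacks described earlier (a simple illustrative calculation; RAID 0 capacity is just the sum of its members, with no redundancy):

```python
# RAID 0 offers no redundancy, so usable capacity is simply
# the drive size times the member count.

def raid0_capacity_tb(drive_tb, n_drives):
    """Usable capacity of an n-drive stripe set, in TB."""
    return drive_tb * n_drives

os_array = raid0_capacity_tb(1, 4)    # 4 TB stripe for OS X + apps
data_array = raid0_capacity_tb(1, 3)  # 3 TB stripe for data
windows_drive = 1                     # single 1 TB drive for Windows
```

The trade-off is that losing any one member of a stripe set loses the whole set, which is why the separate backup machines matter.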
 
Thanks nanofrog for your suggestions. Intriguing. I did not know that I could install Windows on a Mac without needing a BC partition or some sort of virtualization software. How do I do it?

1. Get the AHCI drivers off of Intel's site (here, and get the Floppy version <unzip only, as they're already extracted>), and load to a USB stick
2. Stuff the Windows install disk in the drive, and restart the system.
3. When it shows the window that allows you to select an installation location, Select Install Driver.
4. Load the drivers from the USB stick
5. Hit REFRESH
6. Format (Quick)
7. Allow it to finish installation
8. Load the BC disk, and load the Windows drivers (specific to the MP's board), and you may have to dig through the folders to find the right ones, but they're there somewhere ;)

Done. :)
 
Can someone explain to a relatively non-technical person such as myself why Macs are so difficult to overclock? Isn't most of the PC overclock-ability due to settings in the BIOS of those motherboards? If that's the case, is there any reason why we can't dig into the Mac's EFI firmware and find those settings as well?

I'm not very experienced with these things, but it seems to me that the opportunity should be there in the Mac firmware to change bus frequencies, etc., but that nobody with technical knowledge on that side of things has bothered to figure it out. I'm sure there is someone out there who knows how to do this, or someone who would be willing to pay an Apple engineer on the side to show us how to do this.

EDIT - I'm willing to contribute $20 towards a bounty to anyone who can figure this out.
 
Apple doesn't expose any EFI settings. This is not uncommon. Many Windows laptops and desktops also don't expose much in the way of BIOS settings. Only enthusiast motherboards designed with overclocking in mind expose these kinds of settings to users.
 