I think the custom motherboard and case design in the Mac Pro is undervalued by most people here.
I do like the clean internals enterprise systems provide, but Apple's not the only vendor out there, and not the best either IMO, especially when it comes to turnaround on replacement parts, even fans.
Having built a number of custom PCs over the years, I can appreciate things like eliminating cables with backplane SATA connectors, integrated SATA runs to a PCIe slot for a RAID card (albeit a poor one), integrated PCIe power for a graphics card, onboard wifi and bluetooth module integration, a two-piece motherboard with the CPUs on a removable daughter card, and sufficient, silent air cooling, just to name a few. These are fantastic design elements that people here seem to overlook. I guess only a hard-core PC modder can appreciate these kinds of things.
Meh. The PCIe PCB lanes for data transfer, though a nice idea, actually make things more difficult and more expensive for the end user with 3rd party cards. As you say, their own card is junk, as covered before. The need for adapters to run the internal array (HDD bays) off a 3rd party card is lousy (adds $165 USD from the start), and a single SFF-8087 cable isn't that big a deal IMO. I'll take interoperability any day. No loss of onboard SATA ports that way either.
The two-piece board was a matter of making it fit in the case, not cooling or convenience to the user (though convenience to assembly was likely highly appreciated).
But as mentioned, there are other system vendors who put in the effort to make their systems' internals clean, cool, and easy to service. And for less money (at least on the CPUs used in the base models)!
But they have an additional advantage: they didn't cripple their systems with things like fixed memory multipliers, locked-down firmware, or sacrificed interoperability with 3rd party add-ons. That adds value IMO.
Fact is, Apple's firmware and system engineering make standard software like Adobe run slower on the same CPUs than on workstations from other brands. Apple cripples the hardware through poor logic board design and primitive firmware. Compared to best practice, 20-60% of the hardware's capability is inaccessible to the customer, and 20-30% of that comes just from using the RAM speed and capacity appropriate for the Intel design, before any pushing.
I can understand the concept of locking their OS to their systems, and using the firmware as the vehicle to do it. But that doesn't mean they couldn't give users access to the firmware, or that they had to go with a fixed multiplier in the case of the '09s.
Both are a major disservice to users.
gugucom: i don't think you'd WANT to overclock a server processor much.. and my 8-core with its 1600MHz FSB (which almost no workstation had when i bought it) flies!
you do have overclocking tools, but overclocking usually goes hand in hand with instability.
In the case of the Nehalems, though, it can make more sense. Intel designed a chip that can OC to quite an extent, so it's not inconceivable to get a modest OC without experiencing system instability.
With budgets getting tighter, and Nehalem being so easy to overclock, that old adage may be reconsidered in the near future, as the funds for additional systems just won't be there. Limited budgets are changing how people buy systems, and I see this as a particular advantage for SOHO and SMB users (i.e. small production houses, engineering,...).
I've taken this route, and even run it with a couple of RAID cards. No problems, and it's stable at 4.12GHz on air (i7-920). Rather amazing. The performance benefits are substantial as well, and it's quite an attractive option.
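For rough context, the clock gain from a jump like that is easy to put a number on. A back-of-the-envelope sketch (2.66 GHz is the i7-920's stock clock; actual application speedup will be smaller, since memory and I/O don't scale with the core clock):

```python
# Rough frequency gain from the i7-920 overclock described above.
stock_ghz = 2.66  # i7-920 stock clock
oc_ghz = 4.12     # reported stable on-air overclock

gain = oc_ghz / stock_ghz - 1
print(f"~{gain:.0%} higher core clock")  # roughly a 55% frequency increase
```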
For server use, overclocking voids warranties, and warranties are what honchos like.
For large corporations that can put aside the funds, yes.
But from what I can tell, it's changing in the current economy for SOHO and SMB. Budgets have gotten too tight, and the ability to OC makes it attractive as a way to get the computing power they need within what they can afford.
However, I'm sure i7 overclockers moving from 2.66 to 4 GHz can certainly attest that their systems are stable.
If done properly, it's quite doable. But it does take proper testing to get there, at least for me, as I'm cautious before trusting critical data operations to it. Corrupt files and BSODs aren't an option for me.
Yeah, big deal... as I pointed out above, you're missing out on 2-5%. You make it sound like without this, a Mac Pro isn't worth it. Bizarre.
For many, this is likely the case (usage pattern). But it doesn't take into account that there are applications that can utilize it now, that software will be rewritten to do so, or the psychological effect on the purchaser of discovering it's hindered (even if they'll never use it).
There wasn't a real need to cripple it, as the IMC was built to do this with certain processors. Worst case, Apple could have just stuffed 1066 in those systems (to save costs on their end), and allowed 1333 operation for those with capable systems who added the correct memory on their own.
The reason you don't see it in many real life apps is the huge bandwidth designed into the Nehalem machines. So memory bandwidth is very seldom the bottleneck it used to be on 2008 machines. Nevertheless castrating RAM by 20% is not something that makes me happy.
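That ~20% figure lines up with the theoretical peak numbers. A quick sketch, assuming triple-channel DDR3 on a Nehalem IMC with a 64-bit (8-byte) channel:

```python
# Theoretical peak DDR3 bandwidth: transfers/s x 8 bytes per 64-bit channel x channels.
def peak_gb_s(mt_per_s, channels=3):
    return mt_per_s * 8 * channels / 1000  # decimal GB/s

bw_1066 = peak_gb_s(1066)  # ~25.6 GB/s triple channel
bw_1333 = peak_gb_s(1333)  # ~32.0 GB/s triple channel
shortfall = 1 - bw_1066 / bw_1333
print(f"Running at 1066 gives up ~{shortfall:.0%} of peak bandwidth")
```

Whether any given app feels that shortfall is another matter, as noted above, but the ceiling really is about a fifth lower.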
Anand's tests show that high performance games and rendering apps are bottlenecked by 1066 C7 memory compared to 1333 C7. With the development of better software, more apps will benefit from memory actually running at the design speed of the memory controller. BTW, typical HPC applications like computational fluid dynamics, weather simulation, crash testing or virtual reality software did not feature in the tests.
They can't cover everything, and aim the tests at a common, general use pattern. Such heavy applications just aren't covered, but as you say, it's applicable (even if for a small number of users), given that MPs are workstations.
There are two other points that would make the application of the conclusions questionable.
1. Tests with desktop CPUs may not accurately translate to Xeon CPUs
2. 1333 memory is on the same price level as 1066, at least when I bought it last week in Munich
1. It's close enough though. Same architecture (ECC support being the only difference between the i7-9xx and the W35xx, clock speeds aside). The only other factors that differ are the voltages and associated TDP. But I don't see these as a real issue at all, so long as the VRs can produce the power and the CPUs are adequately cooled. This is possible on PC boards and even the SP MP.
2. True, but to a lesser extent. Apple sets their prices when the system first ships, and doesn't factor lower component prices into the MSRP over time. Ultimately, vendors always charge more for add-ons, so most who are budget conscious will go 3rd party anyway.