Many motherboard manufacturers have better wiring for PCIe lanes; Apple is just being sloppy. I am not talking about Dell or HP or the likes. They are even worse than Apple. Take Asus, for example. They do a nice job on motherboards, and so does Gigabyte. No need for switches.

Chuckle.....

ASUS Z8PE-D12X board
http://www.asus.com/product.aspx?P_ID=gGozRAk0YWQCQtSA

"...
Total PCI/PCI-X/PCI-E Slots: 6
Slot Location 1: 1 * PCI-X 100/133 MHz
Slot Location 2: 1 * PCI-X 100/133 MHz
Slot Location 3: 1 * PCI-E x16 (Gen2 x8 Link)
Slot Location 4: 1 * PCI-E x16 (Gen2 x8 Link)
Slot Location 5: 1 * PCI-E x16 (Gen2 x8 Link)
Slot Location 6: 1 * PCI-E x16 (Gen2 x16 Link) (Auto switch to x8 Link if slot 5 is occupied)
Slot Location 7: 1 * MIO Slot for Audio card (PCI-E x1 is not supported)
..."

Ooooooo looky-looky ..... a switch.


Toss in the standard Apple design constraint not to add "old legacy" tech to the newest models (so lose the PCI-X slots; they're only going to take old legacy cards with old legacy drivers anyway) and you have exactly four PCI-e slots. As I said, all the other vendors are operating with the same 36-lane constraint. It is a simple matter of straightforward arithmetic to see that they too are using switches if their PCI-e slot count is substantially higher and/or their headline bandwidth numbers are higher. Apple just isn't pretending there's more bandwidth than is actually in the box.
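
To make that arithmetic concrete, here's a quick back-of-the-envelope sketch (assuming the 36-lane PCIe 2.0 budget on the IOH that this whole discussion works from):

```python
# PCIe 2.0 lane budget on a 36-lane IOH (X58 / 5520 class)
LANE_BUDGET = 36

# What the four ASUS PCIe slots would demand if all ran at full width
full_width = 8 + 8 + 8 + 16       # slots 3-6 per the spec quoted above
print(full_width <= LANE_BUDGET)  # False: 40 lanes demanded, can't hard-wire it

# With the auto-switch engaged (slot 6 drops to x8 when slot 5 is occupied)
switched = 8 + 8 + 8 + 8          # 32 lanes
print(switched <= LANE_BUDGET)    # True: now it fits
```

Forty lanes of demand against 36 of supply is exactly why that auto-switch exists.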

If you want a workstation with a 16x graphics card and an 8x 10Gb SAN card that you want to run full blast, then the Mac Pro is a better design tradeoff than that Asus board. If you want to add three 8x cards to your box to hook up gobs of very fast direct-attached storage (DAS), then the Asus board is better. One isn't necessarily better than the other; it depends on what you're doing. In the Mac Pro market I suspect there are going to be far more folks using two 16x cards, or one 16x and one 8x card, than those that need three 8x's.
It is a reasonable design tradeoff.



The ICH10R has more than one x1 PCIe lane available; in fact it has six x1 lanes, which can be configured into x4 and individual x1 links. Also, allowing 1GB/s of data through that x4 is better than having to share it through some latency-ridden switch.

I just didn't explicitly quote a long list of consumers because I apparently suffered from delusions that folks would go take a hard look at the specs.

Typically the PCI-X and PCI slots are consumers of that 6. Firewire is. USB 3.0 is. An extra onboard SATA/RAID controller often is (or it increases the switch usage in the "upper" 36 lanes ***), etc. If you look for any "value added" feature on the motherboard that is not part of the core chipset, then a large percentage of the time it is a consumer of some number of those 6.


Sure, Apple probably has 2-3 lanes not hooked up. However, there is also no clear quantitative evidence that there isn't a bottleneck in the overall PCI-e switch. The chipsets are not generally designed to run everything on all possible channels at full blast.

The Mac Pro design doesn't lend itself to carrying forward "old" cards. It lacks backwards flexibility. However, unless you're using a single Mac Pro to drive a multimillion-dollar visual simulator or some large DBMS workload (with high DAS bandwidth requirements), it fits a broad spectrum of workstation users.

*** "upper" relative to this diagram http://en.wikipedia.org/wiki/File:X58_Block_Diagram.png
 
The problem with the Mac Pro case is that it is poorly designed from the outset. There is something to be said for full- and mid-tower ATX designs.

If Apple re-designed the case to be AT/ATX compliant, there would be more space available for expansion cards and HDD bays. They could still keep the beautiful design and elegance the Mac Pro case has, in particular the backplane, while appealing to the market that needs further expansion.
There's (in my opinion) absolutely no need to have the CPUs & RAM on a breakout board. I also believe the design would work better for airflow.
 
Where did this value come from?

Power specs on Apple's website say:

Current: Maximum of 12A (low-voltage range) or 6A (high-voltage range)

Line voltage: 100-120V AC or 200-240V AC (wide-range power supply input voltage)

200V x 6A = 1.2kW

That's where I got my value from :p

So I was correct in one way!

At 80% efficiency, that's 960W max draw.
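
Spelled out as a quick sketch (the 80% efficiency figure is my assumption, not anything Apple publishes):

```python
# Worst-case wall draw from Apple's published ratings
wall_high = 200 * 6      # 200V x 6A  = 1200W (high-voltage range)
wall_low  = 100 * 12     # 100V x 12A = 1200W (low-voltage range, same ceiling)

# Assuming ~80% conversion efficiency, the DC power left for components:
print(wall_high * 0.80)  # 960.0W
```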

Either way, that's more than enough. You can run multiple GPUs and god knows what else with that. I know, because I did it with a PSU half that size :)

The PSU of the Mac Pro isn't a limiting factor ;)

Two CPUs of 130W: 260W
Two GPUs of 220W: 440W
Motherboard: 100W
Memory (sticks): 50W
Hard-disks (4): 40W
Everything else: 50W

= 940W, and you will NEVER, EVER load EVERYTHING to its utter max, so again, the PSU of the Mac Pro is MORE than sufficient :)
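
For anyone who wants to check the sum (these are the estimates above, not measured numbers):

```python
# Sanity check on the worst-case component budget
budget = {
    "two CPUs @ 130W": 2 * 130,   # 260W
    "two GPUs @ 220W": 2 * 220,   # 440W
    "motherboard":     100,
    "memory sticks":   50,
    "hard disks (4)":  40,
    "everything else": 50,
}
print(sum(budget.values()))       # 940W, just under the ~960W of usable DC
```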

I encountered people over-provisioning the PSU back in my overclocking days; the 1.2kW PSUs out now are simply for tri-GPU setups.

The problem is people who think they know best when actually they know absolutely nothing about the subject :D Edit: Not you nano-frog :)
 
Power specs on Apple's website say:

Current: Maximum of 12A (low-voltage range) or 6A (high-voltage range)

Line voltage: 100-120V AC or 200-240V AC (wide-range power supply input voltage)

200V x 6A = 1.2kW
This is where I thought it may have come from. ;)

But no real PSU on the planet is 100% efficient. :eek: :p
 
I encountered people over-provisioning the PSU back in my overclocking days; the 1.2kW PSUs out now are simply for tri-GPU setups.

The problem is people who think they know best when actually they know absolutely nothing about the subject :D Edit: Not you nano-frog :)

Only if you consider a Fermi GPU would 1.2kW of power be just enough for a dual-GPU setup.
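
A rough sketch of why Fermi changes the picture (the ~250W figure is the commonly cited GTX 480 TDP, an assumption on my part, with the rest of the budget reused from above):

```python
# Same worst-case budget, but with two Fermi-class cards swapped in
fermi_pair = 2 * 250              # ~250W TDP each (assumed GTX 480 figure)
rest = 260 + 100 + 50 + 40 + 50   # CPUs, board, RAM, disks, misc from above
print(fermi_pair + rest)          # 1000W: over the ~960W of usable DC power
```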

Chuckle.....

ASUS Z8PE-D12X board

Of course you'd choose that one, but we know Apple doesn't go with that type of motherboard. Come now. That one has way too many features and memory slots for a fair Mac Pro motherboard comparison.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813131389

That one is more like it.


....Typically the PCI-X and PCI slots are consumers of that 6. Firewire is. USB 3.0 is.

Now, now... not everyone uses those for Firewire or USB 3.0. Those x1 lanes have plenty of headroom for Firewire 400, but they're too slow for USB 3.0 (recall each x1 lane from the ICH10R is PCIe 1.x, which means a bandwidth of 250MB/s per lane).

However, there are some manufacturers (as an Anandtech article once noted) that go the grouped-PCIe route, in which they bundle four x1 PCIe 1.x lanes through a separate bridge controller into a single wider link and then feed that link to a USB 3.0 or SATA 3.0 controller to allocate the right amount of bandwidth (in this case a link with 1GB/s of bandwidth). However, they [Anandtech] found this method to be worse than an add-in card, not necessarily because of bandwidth issues, but because of the horrible latency it introduced.
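
The bandwidth arithmetic behind that, as a minimal sketch (250MB/s is the per-direction payload rate of a PCIe 1.x lane; the USB 3.0 and SATA figures are raw line rates with 8b/10b encoding stripped off):

```python
pcie1_lane = 250            # MB/s per PCIe 1.x lane, per direction

# Payload ceilings of the newer interfaces (line rate * 0.8 for 8b/10b, /8 for bytes)
usb3  = 5_000 * 0.8 / 8     # 5Gb/s -> 500 MB/s
sata3 = 6_000 * 0.8 / 8     # 6Gb/s -> 600 MB/s
print(pcie1_lane >= usb3)   # False: one bare x1 1.x lane bottlenecks USB 3.0

# The bundled link described above: four 1.x lanes behind a bridge chip
print(4 * pcie1_lane)       # 1000 MB/s, i.e. the 1GB/s figure
```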

An extra onboard SATA/RAID controller often is (or it increases the switch usage in the "upper" 36 lanes ***), etc. If you look for any "value added" feature on the motherboard that is not part of the core chipset, then a large percentage of the time it is a consumer of some number of those 6.

There is no need for an onboard RAID controller when there is one already built into the ICH10R; otherwise, it'd just be the plain ICH10 chip. However, as we all well know, built-in rarely means performance, so we have add-in RAID cards. So those six lanes are still free on Apple's Mac Pro. Remember, Apple isn't known for these "value-added" features; all the more reason to sell an Apple-branded RAID card.

Sure, Apple probably has 2-3 lanes not hooked up. However, there is also no clear quantitative evidence that there isn't a bottleneck in the overall PCI-e switch. The chipsets are not generally designed to run everything on all possible channels at full blast.

The Mac Pro design doesn't lend itself to carrying forward "old" cards. It lacks backwards flexibility. However, unless you're using a single Mac Pro to drive a multimillion-dollar visual simulator or some large DBMS workload (with high DAS bandwidth requirements), it fits a broad spectrum of workstation users.

*** "upper" relative to this diagram http://en.wikipedia.org/wiki/File:X58_Block_Diagram.png

Apple has all six x1 lanes on the ICH10R bridge open; they have to, because of that switch using the same x4 from the X58 chip.

As for bottlenecking, I have no idea if those x4 lanes are a bottleneck. One thing is true, however: it doesn't hurt Apple to use those x4 lanes from the Southbridge and make ends meet for one peripheral.

Agreed on the legacy and "old" cards & tech comment.
 
Apple has all six x1 lanes on the ICH10R bridge open; they have to, because of that switch using the same x4 from the X58 chip.
As they don't offer RAID 5 under Disk Utility, the ICH10 would be sufficient (it can still run 0/1/10 in AHCI mode).

Looking at the backplane board, one of the part numbers I've been given by someone else confirmed it's the ICH10, not the ICH10R (I am presuming the numbers were read off correctly, as I just got typed info; the pics of the boards sent were useless). As it happens, it also lets them save $5 per system (ICH10 = $14, ICH10R = $19 according to ark.intel), which can be put towards the cost of the PCIe switch used for Slots 3 & 4. :eek: :p
 
As they don't offer RAID 5 under Disk Utility, the ICH10 would be sufficient (it can still run 0/1/10 in AHCI mode).

Looking at the backplane board, one of the part numbers I've been given by someone else confirmed it's the ICH10, not the ICH10R (I am presuming the numbers were read off correctly, as I just got typed info; the pics of the boards sent were useless). As it happens, it also lets them save $5 per system (ICH10 = $14, ICH10R = $19 according to ark.intel), which can be put towards the cost of the PCIe switch used for Slots 3 & 4. :eek: :p

I believe even the cheapest $14 ICH10 has Intel Matrix Storage Technology (the one responsible for RAID)...

See here:
 

[Attachment: Screen shot 2010-08-24 at 7.01.15 PM.png]
I wonder, then, what the difference is.
I started to clarify, but didn't. :eek:

The ICH10 = 0/1/10 support with AHCI enabled (it has the hardware support in the chip, but it needs AHCI-compliant disks and a driver before it will function). If these conditions are met, then 0/1/10 are implemented via software (which is what Disk Utility does).

The ICH10R, however, actually has a hardware RAID controller in it as well (ARM-based), so it can handle the RAID functions for 0/1/10/5 on its own. But as Apple doesn't offer users access to the firmware (users can set up an array directly that way, even before an OS is ever installed) or an application to manage it (which is certainly possible with EFI), the ICH10R's additional feature is actually useless here.
 