So, if I were to buy a PCIe 2.0 SATA 6Gb/s card and install it, and then an external case/drive that also runs SATA 6Gb/s, can my 2008 Mac Pro handle the full 6Gb/s? So I would actually get faster speeds outside of the box than inside? I know if I were to plug a 6Gb/s drive in internally I'd still be stuck at 3Gb/s, but this external connects via PCIe 2.0, which I guess can handle that throughput? Am I wrong? http://eshop.macsales.com/item/Newer%20Technology/MXPCIE6GS2/
In response to your question: you are correct. It will only work with external drives in non-port-multiplier configurations. If you want to use internal SATA 6Gb/s drives, you will need to get a SATA card which supports internal drives.
Hello, You'd certainly get higher theoretical bandwidth. But as far as I know, there are no mechanical hard drives that saturate the 3Gb/s link, even the ones branded 6Gb/s, so going to 6Gb/s won't help with real-world speeds. Riding your bicycle on a highway doesn't mean you'll automatically hit 75mph. The fastest SSDs will take advantage of the extra bandwidth, and most likely a striped RAID set of regular hard drives would too. A single mechanical hard drive? I don't think so. Loa
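To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python; the drive figures are assumed typical sustained rates for the era, not measurements:

```python
# Back-of-the-envelope check: SATA uses 8b/10b encoding, so each data byte
# costs 10 bits on the wire, and usable payload is the raw rate divided by 10.
SATA_LINKS_GBPS = {"SATA 3Gb/s": 3.0, "SATA 6Gb/s": 6.0}
HDD_MBPS = 130   # assumed sustained rate for a fast 7200rpm mechanical drive
SSD_MBPS = 270   # assumed sustained rate for a fast SATA SSD of the time

for name, gbps in SATA_LINKS_GBPS.items():
    usable_mbps = gbps * 1000 / 10          # e.g. 3Gb/s -> ~300MB/s of payload
    print(f"{name}: ~{usable_mbps:.0f} MB/s usable")
    print(f"  a single HDD fills about {HDD_MBPS / usable_mbps:.0%} of it")
    print(f"  a fast SSD fills about {SSD_MBPS / usable_mbps:.0%} of it")
```

Even on the 3Gb/s link a lone mechanical drive only uses a fraction of the bandwidth, which is why the wider pipe doesn't speed it up.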
Yeah, I figured as much on the actual HDD speed; they're pushing these throughputs up for SSD but marketing them on disk speed... they've always got to be sneaky. Well, I mean if you RAID it up pretty intensely you may hit that. I've got a software striped RAID w/2 disks and hit about 150-180. Thanks guys! I had no idea that PCIe etc. interfaces were that much faster than SATA.
And a substantial sticker shock to go with them too (those that reach or exceed 1GB/s reads). *Warning, you may want to sit down first...* Fusion ioDrive Duo 320GB, Fusion ioDrive Duo 640GB
Lol, don't use Dell. Only $4,199 for a 1TB OCZ with reads and writes of 1.4GB/s: http://www.newegg.com/Product/Product.aspx?Item=N82E16820227517
There aren't many vendors that carry the Fusion ioDrive Duos, and Dell was the only site I saw that actually listed any prices (just a few system vendors such as Dell and HP carry them). CDW always lists them as "Call". The point being, though: they're quite expensive.
In order to get higher speed, the system must support PCIe v2.0 (5.0Gb/s per lane), and I don't think the Mac Pro 2008 has that. You can get up to about 256MB/sec if the external box is RAID 0 or RAID 5 (hardware RAID). Although the system's PCI Express is v1.0 (2.5Gb/s per lane), there is still a definite speed gain: my test with our five-drive tray-less hardware RAID 5 box, the eBOX-R5, and a SATA 6.0Gb/s card, the eS3_PCIE21, gives us another 20MB/sec extra, which is good for our HD editing.
Having owned a 2008, I can say... there are TWO PCIe 2.0 slots and two 1.1 slots - the first two are 2.0 and the other two are 1.1. I know, as I had a 3.00GHz 2008 Harpertown.
PCI-e bandwidth comes from how many lanes you lash together. In v1.0 each lane is 250Mb/s and in v2.0 it is 500Mb/s. There are some cards and slots that are 16x, so you could get 4Gb and 8Gb respectively out of them. However, usually folks have socketed a 16x video card into a Mac Pro, so there's not much choice about where you can put it. (Normally there is a free one, but if you have stuffed lots of cards into the box, you may have to choose what is more important.) The other 4x slots are 1Gb/s and 2Gb/s respectively. (A 4x v2.0 slot is equal to an 8x v1.0 in terms of bandwidth, but not in electrical connections.)

The Mac Pro 2008 has PCIe 2.0 (http://support.apple.com/kb/SP11). You'd need an open 16x slot. You need slightly more than 6Gb/s because PCIe has its own protocol overhead that must be layered on top of the SATA one.
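A quick sketch of that lane math in Python, using the per-lane payload figures the thread settles on a couple of posts down (250MB/s for v1.0, 500MB/s for v2.0), compared against what a SATA 6Gb/s link can carry (~600MB/s after its own encoding overhead):

```python
# Per-lane usable bandwidth after encoding overhead (MB/s), and how each
# common slot width stacks up against a SATA 6Gb/s link (~600MB/s payload).
PER_LANE_MBPS = {"PCIe 1.0": 250, "PCIe 2.0": 500}
SATA_6G_PAYLOAD_MBPS = 600

for version, per_lane in PER_LANE_MBPS.items():
    for lanes in (1, 4, 16):
        slot_mbps = lanes * per_lane
        enough = "yes" if slot_mbps >= SATA_6G_PAYLOAD_MBPS else "no"
        print(f"{version} x{lanes}: {slot_mbps} MB/s (covers SATA 6Gb/s? {enough})")
```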
Granted, affordable drive speeds aren't even close to that, but why are we even bothering with SATA if PCIe can totally outperform it on bandwidth? And those Duo drives... wow. Just plain wow. Set up a striped RAID of those things and you really could check tomorrow's email today. Of course, in 5-10 years, those speeds will be far surpassed. Imagine what it'll be like to dump GBs of data, hundreds of GBs of data, in mere seconds.
PCIe lanes and slot configurations are limited though (by the chipset and the physical space available on the board), so having other interfaces available, such as SATA, allows more peripheral devices in the system without competing for the same bandwidth (separate controllers in the chipset, though at some point the chipset-to-CPU interface will carry all of the data; QPI in the LGA1366 architecture, and that will carry on for a while into other platforms in the same segment).

Mechanical HDDs can't saturate SATA 3.0Gb/s right now (good to ~270MB/s real world). SSD is basically there already, so SATA 6.0Gb/s is very attractive, as it's going to allow SSD speeds to get faster without being forced onto a PCIe flash card. Flash cards are expensive, and consume a slot that could be better used for something else, such as a 2nd graphics card (i.e. GPGPU processing). But if more throughput is needed than SATA can supply for storage, then PCIe will help you out (just be prepared to shell out more $$$ than for an SSD).
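To illustrate where each path tops out for striped sets, here's a rough sketch; every per-drive and per-interface figure is an assumed round number for illustration, not a benchmark:

```python
import math

# Approximate usable ceilings for each path (MB/s); assumed round figures.
INTERFACE_MBPS = {
    "SATA 3Gb/s":  270,    # the ~real-world ceiling mentioned above
    "SATA 6Gb/s":  560,    # a little under the 600MB/s payload maximum
    "PCIe 2.0 x4": 2000,   # 4 lanes x 500MB/s
}
DRIVE_MBPS = {"mechanical HDD": 130, "SATA SSD": 270}   # assumed sustained rates

for iface, ceiling in INTERFACE_MBPS.items():
    for drive, rate in DRIVE_MBPS.items():
        # Number of striped drives at which the interface becomes the bottleneck.
        n = math.ceil(ceiling / rate)
        print(f"{iface}: ~{n} striped {drive}s would saturate it")
```

The takeaway is that a single SATA port runs out of headroom after only a few fast drives, while a multi-lane PCIe slot leaves room for much larger striped sets.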
It's rather easy to tap out a b rather than a B on a keyboard though, and it's happened to more than a few MR members in the past. I've even done it (i.e. hit Submit before noticing it, if at all, when typing hurriedly). And I'd be willing to bet it will continue to happen in the future.
Sorry, I got mixed up with your 'b' in the original post. It should be 250MB/s per PCIe lane in v1 and 500MB/s per lane in v2 in actual data-transfer bandwidth. I had pulled that from Wikipedia on a quick look-up, but it has a pointer here: http://www.interfacebus.com/Design_Connector_PCI_Express.html [which has a slightly different number for PCIe v2 that I'm not sure is quite right, but it doesn't go into the per-version protocol calculation].

The point was that, to me, bandwidth means what you actually get out, not the "transfers" numbers. At the end of the day you want the bits off the disk, and there is protocol overhead. Each lane is composed of a couple of wires, so the transfer rate on a single wire doesn't really get to the point of bandwidth either. That's aligned with the language the standards folks use: http://www.pcisig.com/specifications/pciexpress/base2/#b21

P.S. SATA / eSATA is limited to just one set of pairs. While it is point-to-point, usually at some part along the path those points merge into a bridge/concentrator/etc. If that bridge only has SATA going out too, then you've effectively capped the bandwidth (putting aside overlapped requests if you get fancy). PCIe has a standard mechanism, lanes, to boost bandwidth by lashing these pairs together so it can aggregate more data and push it through without hitting that bottleneck. So even though each PCIe v1 lane is slower than SATA 3.0, they can gang up and in aggregate push through at least as much data.
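For anyone curious where those per-lane figures come from, here's a minimal sketch of the encoding math; PCIe 1.x/2.0 and SATA both use 8b/10b encoding, so each data byte costs 10 bits on the wire:

```python
def payload_mbps(line_rate_gbps: float) -> float:
    """Raw line rate (Gb/s) to payload bandwidth (MB/s) for an 8b/10b link."""
    return line_rate_gbps * 1000 / 10   # 10 wire bits per data byte

pcie_v1_lane = payload_mbps(2.5)   # 250 MB/s per lane
pcie_v2_lane = payload_mbps(5.0)   # 500 MB/s per lane
sata_3g      = payload_mbps(3.0)   # 300 MB/s for the whole link

print(f"PCIe 1.x lane: {pcie_v1_lane:.0f} MB/s, PCIe 2.0 lane: {pcie_v2_lane:.0f} MB/s")
print(f"SATA 3Gb/s link: {sata_3g:.0f} MB/s")

# A single v1 lane (250MB/s) is a bit slower than a SATA 3Gb/s link (300MB/s),
# but lanes aggregate: even a x2 or x4 v1 link exceeds it.
for lanes in (1, 2, 4):
    print(f"PCIe 1.x x{lanes}: {lanes * pcie_v1_lane:.0f} MB/s vs SATA 3Gb/s {sata_3g:.0f} MB/s")
```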