Oh, I was hoping to use SSDs internally and just get a PCIe card for 10GbE (the internal NICs in the Mac Pro really should be 10GbE, but they're not).
So that's not possible?
Pragmatically, no. Let's start with the diagram on the linked page below. (The Mac Pro would have either an X58 or a 5500, but it makes no difference; they play the same role with respect to the DMI bus interface.)
http://en.wikipedia.org/wiki/Intel_X58
The internal SATA controller is on the other side of the DMI bus (~2 GB/s aggregate, roughly 1 GB/s in each direction) from any 10GbE PCI-e card you are going to get. The PCI-e card will be hooked to the X58/5500 chip, so the two are separated by that DMI link.
While each SATA link to the controller is 3 Gb/s, there is no guarantee that the controller's uplink to the rest of the computer runs that fast for all ports at once. It never does (SATA isn't set up on the assumption that all of the links are active at full speed all the time). Typically the controllers cap out at the aggregate bandwidth of around 2-3 links.
There are several different sources of I/O sharing that DMI link: PCI-e v1 (FireWire on the Mac Pro), the 1GbE links, USB, etc. It would be bad to let any one of them saturate the link; a rough budget is sketched below.
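To make the contention concrete, here's a back-of-envelope budget in Python. The per-device rates are illustrative assumptions (e.g. ~250 MB/s per SSD), not measurements from any particular machine:

```python
GBps = 1.0  # all figures in GB/s

# DMI v1: ~2 GB/s aggregate, so roughly 1 GB/s in each direction.
dmi_per_direction = 1.0 * GBps

# Assumed read traffic from four internal SSDs at ~250 MB/s each.
# (The SATA controller's uplink typically caps out around the
# aggregate of 2-3 ports anyway, i.e. ~0.75 GB/s.)
ssd_reads = 4 * 0.25

# Other traffic that also shares DMI on a Mac Pro (rough ceilings).
other = {
    "FireWire 800 (PCI-e v1)": 0.10,
    "2x 1GbE": 0.25,
    "USB 2.0": 0.06,
}

demand = ssd_reads + sum(other.values())
print(f"Potential demand on DMI: {demand:.2f} GB/s "
      f"vs ~{dmi_per_direction:.2f} GB/s available per direction")
# -> ~1.41 GB/s of potential demand against ~1 GB/s of link: the SSD
#    traffic alone can hit the ceiling before the 10GbE card (on the
#    other side of DMI) ever sees the data.
```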
Any recommendations for PCIe cards for 10GbE and for using SSDs such that you could saturate a 10GbE connection?
This seems like yet another set-up for selling PCI-e RAID cards in this forum, but....
You have a better chance if you use two SATA controllers.
Two SSDs on the internal controller, and two SSDs in an external box on a 4x PCI-e v2.0 controller (in one of the 4x slots). It need not be a "proper RAID card": RAID-0 just means getting the data off the spindles, which isn't that hard nor that much overhead. The card just needs to be fast enough to pass through what the second SATA controller is pulling, especially since you're about to turn around and pump the data right back out through the TCP/IP stack. The 10GbE card would go into the second 16x slot. (See the sketch below for how the bandwidth splits out.)
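Roughly what that split buys you, again assuming an illustrative ~250 MB/s per SSD:

```python
ssd_read = 0.25   # GB/s per SSD, an assumption for illustration

internal_pair = 2 * ssd_read  # this half crosses DMI (~1 GB/s per direction)
external_pair = 2 * ssd_read  # this half rides the 4x PCI-e v2.0 slot (~2 GB/s)

ten_gbe_wire = 10 / 8         # 10 Gb/s = 1.25 GB/s ceiling on the wire

print(f"From internal pair (over DMI): {internal_pair:.2f} GB/s")
print(f"From external pair (over 4x):  {external_pair:.2f} GB/s")
print(f"Total {internal_pair + external_pair:.2f} GB/s "
      f"toward a {ten_gbe_wire:.2f} GB/s 10GbE ceiling")
# Splitting the drives leaves DMI carrying only 0.5 GB/s of disk
# traffic, well under its ceiling, while the other half of the stream
# never touches DMI at all. Faster SSDs (or more of them) would be
# needed to fully saturate the 10GbE link.
```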
If you just want to blow off the DMI link altogether, then get a SATA II 4x PCI-e 2.0 RAID controller. The problem with the "real RAID" cards commonly thrown around here is that they are 8x, and you don't have an 8x slot to put them into. (You can go through gyrations to hook the cards to the internal drive sleds.)
If you find a 10GbE card that will work in a 4x slot, you may choke off some of the 10GbE throughput, but any shortfall is more likely to come from not getting data to/from the card fast enough.
You need a bit more than the raw PCI-e link speed because there are various sources of overhead along the way from disk to OS to TCP/IP stack to wire (and back); a rough example follows.
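As a sketch of how the headroom erodes (both efficiency factors below are assumptions for illustration, not measurements):

```python
# Nominal 4x PCI-e v2.0 bandwidth, already net of 8b/10b encoding.
link_rate = 2.0  # GB/s

pcie_packet_eff = 0.85  # TLP headers, flow control, small payloads
stack_eff = 0.85        # driver, interrupts, buffer copies, TCP/IP framing

usable = link_rate * pcie_packet_eff * stack_eff
print(f"~{usable:.2f} GB/s usable out of {link_rate:.1f} GB/s nominal")
# -> roughly 1.45 GB/s, not far above the ~1.25 GB/s a saturated 10GbE
#    link needs, which is why you want headroom on every hop.
```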