You always have a TX (transmit) and an RX (receive) fiber. The reason there are two pairs of such cables is that a SAN (e.g. one built around Xserve RAIDs), when done right, has two independent switching fabrics, typically referred to as Fabric A and Fabric B (or Fabric X and Fabric Y). This means every host has a connection to two switches, and the disk arrays connect to the same two fabrics as well.
Just to clarify....
DMP (Dynamic Multi-Pathing) uses two fabrics, but both fabrics have RX/TX connections. That's why you have 4 fibres - two more-or-less independent pairs of fibre connections to storage (people paranoid about availability wouldn't trust a dual-port HBA - they'd insist on two HBAs).
The goal is that if a fabric fails (due to a broken cable, a failed HBA (Host Bus Adapter - the Fibre Channel PCI/PCI-X/PCIe card), a failed switch, or whatever), you can still transmit and receive on the other fabric.
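To make the failover idea concrete, here's a minimal toy sketch in Python of what a multipathing driver conceptually does. The FabricPath class and send_io function are made up for illustration - real DMP (or Linux dm-multipath) does this inside the kernel, not like this:

    # Toy model of dual-fabric multipathing (illustrative only).
    class FabricPath:
        def __init__(self, name):
            self.name = name
            self.healthy = True  # flips to False on a cable/HBA/switch failure

    def send_io(paths, block):
        # Try each fabric in order; fail over to the next healthy path.
        for path in paths:
            if path.healthy:
                print("block %d sent via %s" % (block, path.name))
                return
        raise IOError("all paths down - no route to storage")

    paths = [FabricPath("Fabric A"), FabricPath("Fabric B")]
    send_io(paths, 1)         # normally goes out over Fabric A
    paths[0].healthy = False  # e.g. broken cable or failed HBA on Fabric A
    send_io(paths, 2)         # transparently fails over to Fabric B

The point is that the host never sees the failure: I/O just keeps flowing over whichever fabric is still up.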
Ethernet over Cat5 also has separate transmit and receive pairs - it just looks like a single cable. Fibre (and nobody has mentioned that the orange cables mean they're using MMF (multi-mode fiber), not SMF (single-mode)) isn't condensed into a single connector/cable, unless you're running Fibre Channel over copper.
This cluster is a bit amateurish - 750 TB spread across 40 boxes? Check out IBM's 500 TB DS8000 disk system (http://www-03.ibm.com/systems/storage/disk/ds8000/index.html).
Instead of 40 boxes that add up to 750 TB, why not two boxes that total 1000 TB?
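For scale, the back-of-the-envelope arithmetic, using the 750 TB / 40-box figures above:

    total_tb, boxes = 750, 40
    print(total_tb / boxes)  # 18.75 TB per Xserve RAID box
    print(1000 / 2)          # 500.0 TB per box with two DS8000-class arrays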
But of course, the audience is video editors, not storage architects.