Correct. It is not a backup. I personally use a dual backup strategy: backed up locally via Time Machine, and to the cloud via Crashplan+.
This is almost certainly easier than a NAS, especially compared with either of the free NAS systems that use ZFS (FreeNAS and NexentaStor), because with those you're kinda on your own to actually build the system, install the software, and figure out a strategy.
I don't know how Crashplan stores data, or whether there's any checksumming to make sure what you download is the same as what you uploaded, should recovery be needed. That is something the ZFS-based NASes do, and they also do remote replication. FreeNAS has a GUI (web browser) for managing this; I'm not sure NexentaStor does. Periodic (scheduled) snapshots act as version control.
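The end-to-end verification ZFS does automatically can be approximated by hand in userland. A minimal sketch, assuming you simply record a SHA-256 digest before uploading and compare it after restoring (the file names here are hypothetical):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file in 1 MiB chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Record the digest before uploading; compare after restoring:
#   before = sha256_of("library-archive.tar")
#   after  = sha256_of("restored/library-archive.tar")
#   assert before == after, "restored copy differs from original"
```

ZFS does this per block and on every read, which is what lets it notice silent corruption long before you ever need a restore.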
It just appears as another drive on the computer... similar to plugging in a FW800 drive. I can keep my A3 library (or whatever I desire) on the Pegasus and just use it. It acts just like any other permanently attached drive.
OK it does this internally, as a feature of the array's controller. When I think of logical volume management, I think of Linux LVM. Core Storage is the same idea, although the tools presently are limited.
From the Pegasus manual: RAID level support: RAID 0, 1, 1E, 5, 6, and 10
I was looking under the Specifications tab and it doesn't list 1E or 6, but under the Models tab I now see RAID 6 also. So with dual parity there's no ambiguity potential like RAID 5 has, but it's still not the same thing as checksummed data. Unless there's a reason to suspect the data (disk read errors), the RAID software (or controller) doesn't use parity to confirm the data is valid. Parity is used to rebuild data should a disk drop out of the array.
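The rebuild-only role of parity is easy to see with single-parity (RAID 5 style) XOR. A toy sketch, not any particular controller's implementation: parity can regenerate a block that is known to be missing, but by itself it can't tell you which disk returned bad data.

```python
from functools import reduce

def xor_parity(blocks):
    """Compute the parity block of a stripe: byte-wise XOR of all blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving, parity):
    """Regenerate the single missing block from the survivors plus parity."""
    return xor_parity(surviving + [parity])

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = xor_parity([d0, d1, d2])

# Disk 1 drops out of the array: parity regenerates it exactly.
assert rebuild([d0, d2], p) == d1

# But silent corruption goes unnoticed: parity is only consulted when a
# disk is known to be gone, not verified on every read, so nothing here
# would flag a stripe where d1 silently became b"BXBB".
```

That last point is exactly the gap per-block checksumming (as in ZFS) closes.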
Regarding JBOD: You first have the physical drives that you manage... and then a logical drive management panel which lets you combine various physical drives into logical drives. From there, you place various RAID levels and policies on the logical drives. I believe that if you were to map the logical drives onto physical drives 1:1, then you would have a JBOD.
Guess it depends on how it's implemented. I know the way Drobo does things, they explicitly say on their web site that ZFS isn't supported. Since it has volume management, and RAID capabilities integrated, usually underlying logical volume management isn't used. Same for btrfs (although I know it works fine on top of LVM).
I have NOT found that GbE and FW800 perform similarly, despite the fact that the bandwidth is similar. FW800 (and TB) seems to be much better integrated into the file system. As you state, large files are less of a problem, but they are still too slow for my taste.
There are all sorts of reasons why this might be. A faulty FW cable is unlikely to work at all, whereas a faulty ethernet cable might just get you poor performance. Most people don't tune their networks at all. With two MacbookPros, Cat5e and a basic linksys router running dd-wrt firmware, I get ~88MB/s which saturates the disk in the MBP with the slower disk. Cat6 cables do make a difference with GigE. So does a better router/switch.
With my A3 library on my Pegasus... I can pull up tens of thousands of pics on my screen, and the thumbnails keep up no matter how quickly I scroll. I can scroll so fast they are a blur, and the Pegasus keeps up. With a NAS... I am quickly looking at empty frames, waiting for the NAS to populate the images.
It's not exactly an apples-to-apples comparison (10 Gbps vs 1 Gbps), so this experience should be expected. How does it compare to 10 GigE? People who work on HD video collaboratively go this route because for such workflows it's actually quite a bit faster to share the storage than to push/pull files to local fast arrays.
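The raw numbers explain most of the gap being discussed above. A quick back-of-the-envelope comparison, using theoretical link rates and ignoring protocol overhead:

```python
# Theoretical link rates in megabits per second.
links = {"FW800": 800, "GbE": 1000, "Thunderbolt": 10_000, "10GbE": 10_000}

for name, mbps in links.items():
    mb_per_s = mbps / 8  # megabytes per second, before protocol overhead
    print(f"{name:>11}: {mb_per_s:>7.1f} MB/s theoretical")

# FW800 and GbE are within ~25% of each other on paper (100 vs 125 MB/s),
# so a large real-world gap usually points at protocol overhead (network
# file protocol round trips hurt small-file workloads like thumbnail
# browsing) or an untuned network, not the wire itself.
```

Which is consistent with both experiences above: ~88 MB/s is close to the GbE wire limit for large streaming transfers, while thumbnail scrubbing is dominated by per-file round trips.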
I would be delighted if Apple offered smart disk caching where the SSD/HDD appeared as a single volume, with the cold data automatically moved to the HDD when space was required in the SSD.
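The behavior wished for here is essentially LRU-style tiering. A toy sketch of the idea (the class, names, and sizes are all illustrative, not Apple's or Intel's design): hot blocks live on the "SSD," and the least recently used block is demoted to the "HDD" when the SSD fills.

```python
from collections import OrderedDict

class TieredStore:
    """Toy SSD/HDD tiering: one logical volume, two physical tiers."""

    def __init__(self, ssd_capacity):
        self.ssd = OrderedDict()   # hot tier, kept in LRU order
        self.hdd = {}              # cold tier
        self.capacity = ssd_capacity

    def read(self, block_id):
        if block_id in self.ssd:               # hot hit: refresh recency
            self.ssd.move_to_end(block_id)
            return self.ssd[block_id]
        data = self.hdd.pop(block_id)          # cold hit: promote to SSD
        self._place(block_id, data)
        return data

    def write(self, block_id, data):
        self.hdd.pop(block_id, None)           # new writes land on the SSD
        self._place(block_id, data)

    def _place(self, block_id, data):
        self.ssd[block_id] = data
        self.ssd.move_to_end(block_id)
        if len(self.ssd) > self.capacity:      # SSD full: demote coldest
            cold_id, cold_data = self.ssd.popitem(last=False)
            self.hdd[cold_id] = cold_data

store = TieredStore(ssd_capacity=2)
store.write("a", b"hot")
store.write("b", b"warm")
store.write("c", b"new")   # SSD full: "a" is silently demoted to the HDD
assert "a" in store.hdd and "c" in store.ssd
```

To the user, `read` and `write` behave like a single volume; which tier a block lives on is invisible, which is the whole appeal.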
This is a bit frustrating because my understanding is we're using Intel motherboards, some of which support Intel Smart Response. While proprietary, it can do this at a hardware level. It would be nice if we could have access to this.
While I prefer open storage, I'd probably accept an Intel proprietary solution here, because if my motherboard dies I'm certainly getting another Mac to regain access to the data. For RAID, though, I shy away from proprietary solutions: if/when the hardware dies, I've lost access to the entire array unless I buy a product using the same RAID implementation, possibly even down to the firmware version. Yes, there are backups, but it takes a while to rebuild from backup.
The trend in storage management is toward data replication and self-healing rather than depending on backups, precisely because restoring from backup takes so long.