Not enough storage capacity + Mac Pro = not possible.
It is possible with the system as it arrives (4x HDD bays, an empty optical bay, with perhaps 2x unused ODD_SATA ports <depending on the specific model>).
In the case of the '09s, it's even possible to exceed the ICH10R's throughput limit before filling all the HDD bays when using SSDs (e.g. 3x Intel SSDs = 750MB/s in a stripe set, but the ICH10R hits the wall at ~660MB/s).
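To put rough numbers on that, here's a minimal sketch. The 250MB/s per-drive figure is just the post's 750MB/s divided across three drives; the ~660MB/s wall is the number quoted above for the ICH10R.

```python
# Back-of-the-envelope only: per-drive rate is assumed (~250MB/s reads each),
# and the controller ceiling is the ~660MB/s figure quoted for the ICH10R.
PER_SSD_MBPS = 250        # assumed sequential read per Intel SSD
ICH10R_LIMIT_MBPS = 660   # approximate controller wall (from the post)

def stripe_throughput(n_drives, per_drive=PER_SSD_MBPS, ceiling=ICH10R_LIMIT_MBPS):
    """Ideal striped throughput, capped by what the controller can pass."""
    return min(n_drives * per_drive, ceiling)

for n in range(1, 5):
    print(f"{n} drive(s): {stripe_throughput(n)} MB/s")
# 3 drives = 750MB/s on paper, but the ICH10R caps it near 660MB/s.
```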
160GB is too small? Add another drive. All 4 slots used? Get an eSATA card. Don't want another card? Put the SSD in some empty space inside the case. Since an SSD doesn't vibrate or generate heat, location isn't very important.
Using cards helps, but even that has limits (e.g. the maximum drive count would be 3x SAS RAID cards that can run 256 disks each <ATTO's SAS cards + SAS expander enclosures>). And it will cost as much as a modest house in some areas.
Eventually, even at a fairly modest drive count, you will have to go external with the drives, SSD or mechanical. For example, you could squeeze up to 12x disks in internally (and need a card or cards to interface them to the system) if you pulled all the optical drives and used 2x 4-bay 2.5" backplanes (= 8x 2.5" disks) + the 4x HDD bays (1:1 drive-per-port ratio). Less if it's 3.5" drives (8x max).
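Quick tally of that internal-bay math, assuming 4-drive 2.5" backplanes in the emptied optical space as described above:

```python
# Internal drive-count tally from the layout described above.
hdd_bays = 4                 # stock HDD bays, 1 drive per port
backplanes = 2               # 4-bay 2.5" backplanes in the emptied optical space
drives_per_backplane = 4

max_25in = hdd_bays + backplanes * drives_per_backplane
print(f'2.5" drives, max internal: {max_25in}')   # 12
print('3.5" drives, max internal: 8 (per the post; the bigger form factor fits fewer)')
```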
Many have budget restrictions, and $$$ can't be ignored.
Yes, but you'll still have 10x the seek time no matter how many drives you add. This is why Google uses RAM to store its entire active search database.
It will depend on the usage. In some cases, random access is the primary usage pattern, while others will spend most of the time doing sequential transfers (e.g. large-file video/graphics work).
In the end, the specifics matter, and general statements can cause problems if the needs aren't so simple.
This is why SSDs are used for boot/application drives and a second HDD is used for mass data storage.
This can work for some. Others, not so much or it's not the best use of the budget to fulfill the requirements.
It's known as a "nightly backup".
That's not what redundancy is.
Redundancy = the system keeps running even after a drive (or drives) fails, depending on the array level used. Say it with me:
RAID /= BACKUP.
The R in RAID = Redundant.
A backup isn't capable of allowing the system to continue to run in such an event (i.e. the OS goes, and the system stops working). A backup is another copy of a file to retain the data in case of a total disaster on the primary disk/array (for the "when the **** hits the fan" scenario). It's hoped that proper practice, equipment, etc. will prevent a total disaster from happening, but it still happens. Statistically, an array that runs long enough will eventually fail. It's not actually a question of IF, but WHEN. That's one of the biggest reasons for an MTBR (Mean Time Between Replacement) policy.
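To illustrate the "WHEN, not IF" point, here's a small sketch of how the odds of losing at least one array member climb with time. The 3% annualized failure rate and the independence assumption are mine, purely for illustration, not from any spec sheet.

```python
# Illustration only: the AFR value and independence assumption are made up --
# the point is just that the probability keeps climbing with time.
def p_any_drive_fails(n_drives, afr, years):
    """Chance at least one member fails within `years`, given a constant
    annualized failure rate (afr) per drive and independent failures."""
    p_one_survives = (1.0 - afr) ** years
    return 1.0 - p_one_survives ** n_drives

for years in (1, 3, 5, 10):
    p = p_any_drive_fails(n_drives=4, afr=0.03, years=years)
    print(f"{years:>2} yr: {p:.1%} chance of at least one failure")
# Which is why an MTBR policy swaps drives on a schedule instead of waiting.
```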
Think of servers, which are not supposed to go down. Ever. At least not until the IT dept. EOLs the system (which happens only after its replacement is operational).

You can swap out drives, and the system can still run. It's done either for online expansion or as part of an MTBR cycle, though they're usually done together (planning them to coincide saves time, effort, strain on the system, and money).
False. RAID is still limited by each drive's 7ms seek time. Adding more drives cannot change physics.
It's not incorrect. Yes, each drive's seek time is fixed, but the speed is increased as a result of parallelism (the files are distributed across the members, which feed the request simultaneously). That's why random access is sped up as well. It's just not as significant as the sequential gain, because of the fixed seek rate of each drive, and the heads must be moved far more often than with large files (which is where striping seriously improves the sequential rates).
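A minimal model of that distinction, with made-up per-drive numbers (100MB/s sequential, plus the 7ms seek from the quoted claim): aggregate throughput scales with member count, but any individual random request is still bounded by one drive's seek.

```python
# All figures are assumptions for illustration; no controller limits modeled.
SEEK_MS = 7.0            # per-drive average seek (the figure from the quote)
PER_DRIVE_MBPS = 100.0   # assumed sequential rate of one mechanical drive

def sequential_mbps(n_drives):
    # Large files are striped across every member, so they all feed at once.
    return n_drives * PER_DRIVE_MBPS

def random_iops(n_drives):
    # With enough queued requests, each member seeks independently, so the
    # aggregate scales -- but any single request still waits out one ~7ms seek.
    return n_drives * (1000.0 / SEEK_MS)

for n in (1, 2, 4):
    print(f"{n} drives: {sequential_mbps(n):.0f} MB/s sequential, "
          f"{random_iops(n):.0f} random IOPS, still ~{SEEK_MS:.0f} ms per seek")
```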
False. That is an OS support issue, not a drive issue.
No, it's not. It's a combination of multiple factors:
1. Drive firmware (many such features have an aspect in the firmware, and there's no set of standards in effect just yet)
2. OS
3. Flash chips used (in terms of reliability)
What part of "they will live longer than the computer's useful life" is unreliable?
Those numbers are misleading. The drives only appear to last as long as listed because of the methodology used in the statistical analysis.
If the analysis were done on 100% of the cells, the result would be nowhere near what's listed on the specification page. That 10% they toss out is that detrimental, which is why they throw it out. Their only other choice is to bin the Flash chips prior to assembly, which generates waste from failed parts and increases costs. Simply put, they took the cheaper route.
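Here's a purely illustrative simulation of that methodology argument. The endurance distribution is made up, and I'm assuming "toss out the worst 10%" means trimming the weakest decile of cells before averaging.

```python
# Made-up numbers; only meant to show how trimming the weakest 10% of cells
# inflates the endurance figure relative to exercising 100% of them.
import random

random.seed(0)
# Assume per-cell write endurance scattered around a nominal 10,000 cycles.
cells = [max(random.gauss(10_000, 3_000), 0) for _ in range(100_000)]

mean_all = sum(cells) / len(cells)
best_90 = sorted(cells)[len(cells) // 10:]   # weakest decile discarded
mean_trimmed = sum(best_90) / len(best_90)

print(f"mean over 100% of cells:    {mean_all:,.0f} cycles")
print(f"mean with worst 10% tossed: {mean_trimmed:,.0f} cycles")
# The trimmed figure is the flattering one; a drive that actually wears every
# cell sees something closer to the first number (or worse, since the weak
# cells are exactly the ones that die first).
```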
False. This is exactly why Intel SSDs are odd capacities: they have built-in wear-leveling capacity not accessible as storage space to the user.
In Intel's case, yes, they included user-inaccessible capacity to deal with the issue (set aside for wear leveling; it's 10% of the drive's capacity, according to Intel). So for a 160GB drive, there's an additional 16GB that's hidden from the user. Now keep in mind, even 10% of that hidden amount will fail sooner than the specs indicate, too. That's where the additional capacity can help if the drive is used in a high-write usage pattern.
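The spare-area arithmetic, using the 10% figure the post attributes to Intel (treat it as approximate):

```python
# Spare-area math per the post; the 10% over-provisioning figure is the one
# attributed to Intel above, so treat it as approximate.
user_capacity_gb = 160
spare_fraction = 0.10

spare_gb = user_capacity_gb * spare_fraction
raw_gb = user_capacity_gb + spare_gb
print(f"user-visible: {user_capacity_gb} GB, hidden spare: {spare_gb:.0f} GB, "
      f"raw flash: {raw_gb:.0f} GB")
# That spare pool is what the wear leveler draws on, which is why it matters
# most under high-write usage patterns.
```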
For primarily read workloads, it's not really an issue, comparatively speaking. And currently, SSDs are aimed at the enthusiast user, not the enterprise sector or the mainstream consumer sector.
That's not the case with other vendors though, and the statement was aimed at ALL SSD makes out there, not just Intel.
...that's what consumer drives are being marketed as...
The enthusiast market is a sub-section of the consumer market.