There's no risk to using SSDs. They've been around for 2 years and are all in the 2nd or 3rd generation or more by now. The issues with performance degradation with use have all been dealt with (Intel's drives never suffered from this in the first place).
For most, this is true, as they're consumers (i.e. OS/applications disk), where the usage patterns are by far reads, not writes.
But in the case of a high write environment, SSDs fall short, especially the MLC based units, which is the type of Flash used in consumer SSDs.
Yes, SLC based SSDs do exist, such as the Intel X25-E, but SLC is noticeably more expensive than MLC, and that's reflected in the prices. SLC is also more commonly found on Flash cards aimed at the enterprise market (much higher throughputs this way, given the increased bandwidth available on bonded PCIe lanes).
Correct me if I'm wrong...
It still holds true right now. It's not really going to improve in terms of write cycles until newer Flash technology is used (FeRAM, for example), or, in the interim, unless the capacity is extended considerably (i.e. adequate "free space" beyond what the user will fill, for wear leveling). But we'd be talking a 1TB minimum here IMO (and it has to be affordable as well; say what a 1TB Caviar Black went for at its initial release), and larger wouldn't be out of the realm of reason for some either (i.e. single drive only = SSD).
OS X appears to have TRIM support in 10.6.4, though it may not be active yet, or it may support only a limited number of drives.
Of course, it could just be a placeholder in System Profiler.
This is what I expect is the case ATM, though it will come at some point.
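For what it's worth, you can at least see what System Profiler claims on your own machine. Here's a rough Python sketch (just an assumption that the "TRIM Support" field shows up under the SPSerialATADataType section on 10.6.4; if it's only a placeholder, it would still be listed, so don't read too much into a "Yes"):

```python
# Sketch: list any "TRIM Support" lines System Profiler reports for SATA drives.
# Assumes OS X 10.6.4+ and that the field appears under SPSerialATADataType;
# a reported value doesn't prove TRIM is actually active.
import subprocess

def trim_support_lines():
    out = subprocess.check_output(
        ["system_profiler", "SPSerialATADataType"],
        universal_newlines=True)
    # Keep only the lines mentioning TRIM, e.g. "TRIM Support: No"
    return [line.strip() for line in out.splitlines() if "TRIM Support" in line]

if __name__ == "__main__":
    lines = trim_support_lines()
    print("\n".join(lines) if lines else "No TRIM Support field reported")
```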
WTF do you mean unproven technology?
In terms of reliability, not performance. SSDs haven't been out long enough in their current form to have long term real world data (what's been simulated in the lab for specifications is NOT equivalent to the real world, as the lab disks were empty).
Where users are concerned, most have information stored on the disk, which means there's less free space for wear leveling. In primarily read usage, this won't matter, so long as there were sufficient good cells when the data was written (meaning it wasn't corrupted during the write process).
But in high write scenarios, the smaller amount of free space is "rotated" through faster, increasing the wear on those cells. Some companies, like Intel, hide some unused capacity (~10%) on the disk (i.e. an 80GB drive has an unusable/unseen 8GB of additional capacity). Most, however, do not do this. And even 10% is low, with existing information showing 20% is more realistic (and this is for consumer use, BTW).
Why this is a problem is that MLC only has a rating of 1E4 write cycles per cell (minimum) before it's dead. SLC is a bit better, as it's rated at 1E5 write cycles. Wear leveling is how the effective life is increased.
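To put some rough numbers on it, here's a back-of-envelope sketch. The 1E4/1E5 cycle ratings are the MLC/SLC figures above; the spare area, daily write volume, and write amplification values are just made-up assumptions to play with, and the model assumes perfect wear leveling over the whole disk (i.e. the optimistic, empty-in-the-lab case):

```python
# Back-of-envelope SSD wear estimate, assuming perfect wear leveling across
# the raw (user-visible + hidden spare) capacity. Illustrative numbers only.
def lifetime_years(user_gb, spare_fraction, rated_cycles,
                   daily_writes_gb, write_amplification):
    raw_gb = user_gb * (1.0 + spare_fraction)          # e.g. 80GB + 10% hidden
    total_writable_gb = raw_gb * rated_cycles / write_amplification
    return total_writable_gb / (daily_writes_gb * 365.0)

# 80GB drive, 10% hidden spare area, 500GB/day of writes (a heavy write
# environment), and an assumed write amplification of 2.
for name, cycles in (("MLC", 1e4), ("SLC", 1e5)):
    years = lifetime_years(80, 0.10, cycles, 500, 2.0)
    print("%s: ~%.1f years" % (name, years))
```

With those assumptions MLC comes out around 2-3 years and SLC around 10x that, and a nearly full drive with poor wear leveling would do worse, since the writes concentrate on the little free space that's left.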
But another trick used to get the specifications is that they just tossed out the lowest 10% of all the cells to improve the statistics. Combine this with the fact that the disk/s used were empty, and the numbers are higher than what you'll see in the real world. In simple terms, they perverted the acceptable practice of tossing outliers from the results, as the bottom 10% of the entire data set is far more than just outliers.
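If the statistics angle isn't obvious, here's a quick toy simulation (purely made-up endurance numbers) showing how much quietly dropping the weakest 10% of cells inflates the "minimum" you get to quote:

```python
# Toy illustration of the "toss the worst 10%" trick. The per-cell endurance
# values are fake (random numbers around a 1E4-cycle mean), purely to show
# the statistical effect, not to model any real drive.
import random

random.seed(1)
cells = [random.gauss(10000, 1500) for _ in range(100000)]  # fake endurance data

cells.sort()
true_min = cells[0]
trimmed_min = cells[len(cells) // 10]   # "minimum" after dropping the bottom 10%

print("worst cell overall : %5.0f cycles" % true_min)
print("worst cell reported: %5.0f cycles (bottom 10%% dropped)" % trimmed_min)
```

The reported worst-case roughly doubles, even though not a single cell actually got any better.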
This has become a more common practice lately, and not just in a single industry. You might recall some of the prescription medications that have been pulled by the FDA in recent years.
Hopefully, you're starting to get the picture.
