I don't care why it came faster; my point is it CAME FASTER, so most of the predictions from the general public, including yourself, were way off. You said it wouldn't be possible to double capacity and halve the price this soon. I'm telling you, IT JUST DID. I don't believe we need to wait 3 years.
You seem to be under the impression "Well, it happened this time, and therefore it will continue to do so." But you're forgetting a few things. The price drop was the result of the new fab process, and that's not going to happen every year, as fabs are an expensive proposition, at roughly $3 billion USD each. So something else has to happen for prices to keep falling while the process node remains fixed (34nm in this case): production must go up, and for capacity to increase on a fixed node, the number of layers must be increased (additional cells). Without that, they can't lower their costs and pass any savings on to the consumer via lower prices.
Supply issues must be addressed first. You seem oblivious to this fact.
Now, assuming they do manage to ramp production quickly and manage another price drop, it does have the effect of pulling in more purchases, which means increased market share.
But here's the rub. Let's say they solve the production quantity issue, but what about the increase in capacity per chip? If that doesn't happen quickly enough for the next model release, the cost per gigabyte stays essentially the same. So you can't assume the general trend of 2x the capacity at 1/2 the cost when part of that equation isn't met.
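To put rough numbers on that argument, here's a minimal sketch of the cost-per-gigabyte arithmetic. Every figure in it (wafer cost, dies per wafer, GB per die) is an invented placeholder, not real fab economics; only the relationships matter:

```python
# Toy cost-per-gigabyte model. All numbers are assumed placeholders.

WAFER_COST = 5000.0  # assumed cost to process one wafer, in USD

def cost_per_gb(dies_per_wafer, gb_per_die):
    """Flash cost per GB, ignoring yield, packaging, and margins."""
    return WAFER_COST / (dies_per_wafer * gb_per_die)

# A process shrink (e.g. 50nm -> 34nm): smaller dies mean more dies
# per wafer, so cost per GB drops even at the same GB per die.
print(cost_per_gb(dies_per_wafer=200, gb_per_die=4))  # ~$6.25/GB
print(cost_per_gb(dies_per_wafer=400, gb_per_die=4))  # ~$3.13/GB

# Stuck on the same node with no density gain: dies per wafer and
# GB per die are both fixed, so cost per GB (and the price) stalls.
print(cost_per_gb(dies_per_wafer=400, gb_per_die=4))  # still ~$3.13/GB
```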
It's not that simple, my friend. If that were the case, a typical SSD wouldn't even last a month. I suggest you read AnandTech's article to clarify all your doubts. You're basing that on the raw cell characteristics alone, but there are always ways to work around them.
Those numbers are the result of device limitations, or simply put, physics. That remains true despite wear leveling (which rotates writes around the drive to keep each cell's write count similar to the others'). It's a compromise solution to the underlying device physics, and a rather good one, but it does depend on having other unused cells available (the compromise).
Wear leveling applied to a full drive is only going to get you so far: once the unused cells are consumed (the worst-case cells, rated at 1E4 or 1E5 cycles, will go first), your data will end up corrupted if any single cell fails.
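To make the spare-cell dependence concrete, here's a toy wear-leveling simulation. It's a deliberately simplified model (simple round-robin over the spare pool, with made-up cell counts and a made-up endurance figure), not any controller's actual algorithm:

```python
# Toy wear-leveling model: writes rotate round-robin across the
# spare cells, so every spare cell wears at nearly the same rate.
# Cell counts and the endurance limit are illustrative assumptions.

def writes_until_first_wearout(total_cells, fill_fraction, endurance):
    """Count writes until the first spare cell hits its cycle limit."""
    spare = max(1, int(total_cells * (1 - fill_fraction)))
    wear = [0] * spare
    nxt, count = 0, 0
    while True:
        wear[nxt] += 1
        count += 1
        if wear[nxt] >= endurance:
            return count
        nxt = (nxt + 1) % spare  # rotate to the next spare cell

# Half-full drive: 500 spares share the load, so wearout comes late.
print(writes_until_first_wearout(1000, 0.50, 10_000))  # -> 4,999,501

# 99% full: the same traffic hammers 10 spares, wearout ~50x sooner.
print(writes_until_first_wearout(1000, 0.99, 10_000))  # -> 99,991
```

The exact numbers are meaningless; the point is that wear leveling's headroom shrinks in direct proportion to free space.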
Intel's numbers were based on a statistical analysis that DID NOT include ALL cells. Instead, they only used up to the 90th percentile, treating the worst-case cells as outliers. By doing this, they get to dump the influence those cells would have had on the outcome. Simply put, this is manipulated data, generating better specs than what the drive is truly capable of.
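Here's a quick sketch of the statistical effect being described, using synthetic endurance numbers. This is not Intel's actual data or methodology, just an illustration of how trimming the worst tail inflates a spec:

```python
import random

# Synthetic cell-endurance population: mostly ~1E5-cycle cells, plus
# a worst-case minority around 1E4 cycles. All figures are invented.
random.seed(42)
cells = [random.gauss(100_000, 10_000) for _ in range(9_000)]
cells += [random.gauss(10_000, 1_000) for _ in range(1_000)]

# Spec from the full population: the worst cell fails first.
full_spec = min(cells)

# Spec after declaring the bottom 10% "outliers" and dropping them.
trimmed_spec = min(sorted(cells)[len(cells) // 10:])

print(f"all cells included:   ~{full_spec:,.0f} cycles")
print(f"worst 10% discarded:  ~{trimmed_spec:,.0f} cycles")
# The trimmed figure looks dramatically better, yet the worst-case
# cells are still physically in the drive and still fail first.
```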
Now, you won't necessarily see this unless you do in fact fill the drive to 100% capacity and continue to write the hell out of it. For enterprise users, this is likely not an issue. But there are those interested in SSDs now for whom it would be. There is a solution, of course, but it means buying increased capacity, which sells at a higher cost.
The individual does have to balance this out, but the specs don't indicate it, so some users who are unaware could get burned. One's usage pattern needs to be researched and properly understood.
But to make a blanket statement that it's not an issue at all is misinformation. The articles you've linked base their comments on the assumption (with high odds they're right) that the enthusiast user base reading them won't hit this situation, and so Intel's manipulated data will be applicable. That may not always be the case, as the statistical minority (say, professionals with very high write usage) may read those articles too.
The drive has a 3-year warranty, and I say in 3 years' time we'll be seeing even cheaper and much faster SSDs on the market. I'll just upgrade by then. The beauty of this technology, my friend, is that an SSD is aware of when it will fail. Before it fails, it halts all write commands, so it can basically guarantee your data stays intact even once you can't write to it anymore. You don't even need a backup.
You're already using an SSD, so you have the option of using what you've got for the next 3 years.

I applaud your willingness to spend early, as it helps push the technological development. I'm also under the impression its limitations won't affect you at all.
And I agree the cessation of writes is a good idea, but that doesn't make your data safe.

Any such assumption is completely foolish. The fact is, things can happen. Take a PSU failure: if the SSD is attached to a PSU rail that hits "meltdown", say the SSD's power is hit with 120VAC, guess what? That drive and all data contained on it is GONE. If your data isn't critical, then fine. But if that's not the case, skipping a backup of some sort isn't a smart idea. Despite any statistics on the probability of a failure, the reason for having a backup is for those unusual circumstances, as the data is presumed far more valuable than the system on which it resides.
Are you on Windows or virtual machines? If those programs require huge amounts of space, I suggest that if you ever get an SSD, you move those programs to a different disk.
I run Windows primarily, but use Linux when needed, as part of the EDA software (MultiSIM) is Windows-only. LabVIEW is available on multiple OSes, but I run the two side by side, as it's a PITA to boot back and forth, and a VM has issues accessing the bench test equipment.
Applications are stored on a SAS RAID 5 array, and data on another RAID 5, but on SATA disks. Both arrays get backups. The data is irreplaceable, so such a setup is warranted.

But that's my usage pattern, and may or may not apply to others.
See, you miss the fact that technology becomes cheaper over time. Heck, even the PS3 is down like 70% from its original price. It's not an estimate if it becomes a reality. Take the Intel X25-M G2, which runs around $229 now, down more than half from the $799 it was before, and it's even faster.
Technology does become less expensive over time, but an estimate is an educated guess, not fact. If it happens, great, but it's not definite, and nothing changes that. PERIOD. If the estimate ends up matching the final result (factual evidence), then it's an accurate prediction, but there's no way to know this before it happens (product release).
I can't count the times I've seen product releases surface, estimates ensue, and then find the two didn't match. Take the current '09 MP line, for example. All the estimates were based on a combination of the '08 model, Intel's quantity price list ($ each @ Q = 1000), and parts costs derived from the current offerings at the time, plus a few new products as well (e-tailers published board and chip prices prior to the formal release near the end).
None of us imagined the current pricing model, as the estimates were quite a bit lower. So estimates are no guarantee of what ends up as reality. Sometimes it's accurate, or close, or like landing in Cleveland when the destination was thought to be LA.

That's why they're only called estimates. It's not absolute or "set in stone" so to speak.
