Putting SSDs in my computer was THE most significant upgrade I have EVER put in a computer. It's a different world. Go for it.

Yeah, I second that; it gave my 2.5-year-old MacBook Pro a second life. Once you have an SSD as your boot drive, you will never go back.
 

I'm still waiting on my Mac Pro to have some first hand experience with this :) Already got my SSD too!
 
My friend, who is a system admin for a post-production company, advises AGAINST using SSDs (for his type of setup). He has been all gung-ho over SSDs for the longest time and is just now changing his mind. This is what he posted on his site:
...Actually we had one problem – they were too fast! Specifically, the driver for the PCIe ethernet card couldn’t load properly during boot up. ...

...BTW, areas where I thought SSDs would help on an edit system, didn’t. Specifically, FCP projects didn’t load any faster...

Just some food for thought.

Errr. How is that remotely the SSD's fault? That sounds like either an Ethernet driver bug or a Mac OS X kernel bug. Either the kernel is being sloppy about concurrency during startup, or the driver is sloppily depending on the boot process being slow enough to stabilize, when there should really be some sort of initialization lock/flag that holds network requests up until the device is initialized. Neither of those would be at the top of my list, though; I'd put the Ethernet card and any custom driver that comes with it much higher.
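
To illustrate the kind of guard I mean, here is a toy sketch (Python, with invented names and timings; obviously not how a real kext or kernel driver is written) of an initialization flag that holds requests until the device has actually come up:

Code:
import threading, time

class ToyEthernetDriver:
    """Toy stand-in for a driver; the real thing lives in the kernel."""
    def __init__(self):
        self._ready = threading.Event()   # the initialization flag

    def bring_up(self, settle_seconds=0.5):
        # Pretend the PCIe card needs some time to settle before it is usable.
        time.sleep(settle_seconds)
        self._ready.set()                 # only now may requests proceed

    def send(self, payload, timeout=5.0):
        # Instead of assuming the boot process is slow enough, wait on the flag.
        if not self._ready.wait(timeout):
            raise RuntimeError("ethernet device not initialized in time")
        return f"sent {len(payload)} bytes"

driver = ToyEthernetDriver()
threading.Thread(target=driver.bring_up).start()  # bring-up runs in parallel with boot
print(driver.send(b"hello"))  # blocks briefly until the device is ready, fast disk or not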


If they haven't submitted a bug report to the Ethernet driver and/or Mac OS X team, that is a mistake, because the root cause, if outlined here correctly, is not the SSD. The SSD is just exposing the symptoms of the defect.

Can't use a part in a Mac Pro because it performs within specs too quickly? LOL! That's kind of sad when you think about it.

Why would the FCP projects load faster if this is a SAN setup (and the projects are presumably on the SAN)?


Nor does this seem unique to an SSD. A 4-5 disk RAID-0 stripe is going to have throughput approximately as high as an SSD.
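
For a rough sense of the numbers behind that comparison (the per-disk and SSD rates below are assumptions typical of current hardware, not measurements):

Code:
# Sequential throughput only; all figures are assumed, not benchmarked.
hdd_seq_mb_s = 110   # one 7200 rpm disk, sequential
ssd_seq_mb_s = 250   # a SATA SSD, sequential read

for disks in (4, 5):
    stripe = disks * hdd_seq_mb_s   # RAID-0 scales sequential throughput roughly linearly
    print(f"{disks}-disk RAID-0: ~{stripe} MB/s vs. SSD: ~{ssd_seq_mb_s} MB/s")

# Random I/O is a different story: seek-bound disks fall far behind an SSD there,
# which is why boot and app-launch feel differ even when sequential numbers don't.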
 

It's not that it is the SSD's fault; it was an issue that was hampering his workflow. Yes, he understands it is ultimately not the fault of the SSD, but without an easy fix, it is smarter, when your machines make you money, to go with the most rock-solid setup. He used the SSDs for months and found the speed increase was not worth the problems with stability and usability. I thought that was quite clear?

Dude I just quoted you what he said. I have no idea where he keeps the projects. He may keep them locally. I'm just pointing out a very specific issue in a very specific set of circumstances where having an SSD is NOT ideal. That was my only reason for posting this.
 
The Intel X-25M can be secure erased with a 7-pass zero overwrite (Apple Disk Utility). If you do that, the SSD will run almost as good as new: 99 percent as fast.

Not sure I can think of a better way to attack the wear-leveling protection mechanisms of an SSD than this. Writing to EVERY logical block on the drive 7 times over will wear it out faster.

Yeah, sure, you have zeroed out a large number of blocks, but unless the controller is smart and aborts the write (because it is redundant), this will directly cause much more wear than a normal period of disk read/write activity.
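
As a back-of-the-envelope sanity check, here is roughly what a 7-pass full-drive overwrite costs in endurance (all figures are assumptions for illustration, not specs for any particular drive):

Code:
# Assumes the controller actually commits every pass (i.e. does not skip
# the redundant all-zero writes) and that wear leveling spreads them evenly.
capacity_gb = 80            # assumed drive size
rated_cycles = 10_000       # assumed MLC program/erase rating
passes = 7                  # the 7-pass zero overwrite

cycles_consumed = passes    # each pass hits every logical block about once
pct_of_life = cycles_consumed / rated_cycles * 100

daily_writes_gb = 10        # assumed ordinary host writes per day
daily_cycles = daily_writes_gb / capacity_gb

print(f"7-pass erase: ~{cycles_consumed} cycles, {pct_of_life:.2f}% of rated life")
print(f"That is about {cycles_consumed / daily_cycles:.0f} days of normal writes in one sitting")

So one wipe is not fatal, but it compresses a couple of months of ordinary writes into a single operation, which is why doing it routinely is the concern.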

Sometimes the software an SSD vendor packages with the drive knows how to "zero out" that specific drive in a much friendlier fashion.


The real problem that SSDs have to get over is when they get time to do garbage collection. The TRIM command pragmatically carves out some of that time: the OS file system code tells the drive which blocks are free, so you pay a bit over time and the writes are faster later. The other way is to find time between reads/writes to prep flash cells for faster writing later. If you could prep a cell for every cell used in a write, in parallel, you would not see any slowdown. You can't do it perfectly in parallel, but there are ways of using multiple banks of flash to do multiple things at the same time (the newer drives with internal garbage-collection controllers are doing this; issuing TRIM commands to these devices just allows them to get further ahead rather than just treading water).
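
To make the "pay a bit now so writes are faster later" idea concrete, here is a toy model of a flash translation layer (purely illustrative; real controllers are far more complicated, and the block counts are made up):

Code:
# Toy model of why TRIM helps: with TRIM, the controller learns which blocks
# are free and can erase them during idle time; without it, the erase happens
# in the write path and the write stalls.

class ToyFTL:
    def __init__(self, total_blocks=8):
        self.pre_erased = set(range(total_blocks))  # ready for fast writes
        self.holding_data = set()                   # drive believes these are live
        self.trimmed = set()                        # OS said these are free

    def write_block(self):
        """Write one block; return True if the write had to stall for an erase."""
        stalled = False
        if not self.pre_erased:
            # Nothing prepared: erase in the write path (the slow case; a real
            # drive would also have to copy any still-valid pages first).
            victim = self.trimmed.pop() if self.trimmed else self.holding_data.pop()
            self.pre_erased.add(victim)
            stalled = True
        self.holding_data.add(self.pre_erased.pop())
        return stalled

    def delete_block(self, trim):
        blk = self.holding_data.pop()
        if trim:
            self.trimmed.add(blk)        # drive now knows the block is reclaimable
        else:
            self.holding_data.add(blk)   # drive still thinks it holds valid data

    def idle_garbage_collect(self):
        # Background work: pre-erase everything the OS has trimmed.
        while self.trimmed:
            self.pre_erased.add(self.trimmed.pop())

for use_trim in (False, True):
    ftl = ToyFTL()
    for _ in range(8): ftl.write_block()           # fill the drive
    for _ in range(4): ftl.delete_block(use_trim)  # delete half the files
    ftl.idle_garbage_collect()                     # idle time before the next writes
    stalls = sum(ftl.write_block() for _ in range(4))
    print(f"TRIM={use_trim}: {stalls} stalled writes out of 4")

Run it and the no-TRIM case stalls on all four writes while the TRIM case stalls on none; that is the difference between treading water and getting ahead.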
 

Then what should have been said is that it was taking too much of his time to track down the problem, so he took an alternate route around it that did not involve wasted time. People who use things for work often do this; the old axiom "time is money" applies. H#@l, I do it sometimes too. The reason I state this is that reading about the problem earlier made buying an SSD for pro applications sound a little frightening, and it's not at all.

There is nothing wrong with using SSDs within Final Cut; I use them sometimes, but it is not my bread and butter. Audio correction is, and SSDs make my applications seem like they are on steroids. I couldn't care less about boot-up and shutdown times; I make my coffee then. That is just an advertising gimmick for retail SSDs.
 
This is because it uses SLC flash, which has a higher write-cycle limit (10x higher = 100,000 cycles), rather than MLC. SLC was designed specifically for the enterprise market, which will be using it in high-write environments.

There are >100,000-cycle MLC products rolling out into shipping devices now or soon, depending on the vendor.

http://www.channelregister.co.uk/2009/10/19/microns_34nm_nand/

http://www.theregister.co.uk/2010/08/17/stec_mlc_sauce/


SLC has a problem coming, in that flash is going to hit a geometry wall, and better density is going to come more from added layers than from shrinkage.
Vendors are moving to push MLC into more "enterprise grade" contexts.
However, there are also SLC parts now up in the 1,000,000-erase range (that gives you about 1,000 erases per day for about 3 years).
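
To put those erase-cycle ratings in perspective, a rough lifetime estimate works out like this (a sketch with assumed capacities and workloads, not vendor numbers; real drives also have write amplification, which shortens these figures):

Code:
def lifetime_years(capacity_gb, rated_cycles, host_writes_gb_per_day,
                   write_amplification=1.0):
    """Years until the average cell hits its program/erase rating,
    assuming perfect wear leveling. Real drives have WA > 1.0."""
    total_writable_gb = capacity_gb * rated_cycles / write_amplification
    return total_writable_gb / host_writes_gb_per_day / 365

# 1,000,000-cycle SLC hammered with ~1,000 full-drive erases per day:
print(f"SLC 1M cycles, 80 GB: {lifetime_years(80, 1_000_000, 80 * 1000):.1f} years")

# 100,000-cycle SLC vs. 10,000-cycle MLC under a heavy 100 GB/day workload:
print(f"SLC 100K, 160 GB: {lifetime_years(160, 100_000, 100):.0f} years")
print(f"MLC  10K, 160 GB: {lifetime_years(160, 10_000, 100):.0f} years")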
 
Those Micron chips mentioned aren't all out yet (namely the MLC parts), or the article contained a typo (300k cycles listed in the article for MLC, when what's sold is 30k). The SLC parts are 300k.

It's an improvement, no doubt, but both articles are aimed at the enterprise market, which is likely to prevent it from hitting most consumer products just yet due to cost reasons. :(

Personally, I'm interested in FeRAM down the road, as it's capable of 1E16 cycles, uses less power, and has higher write performance. It still needs work to get it ready for mass production (rather low densities yet, and expensive), but it has some serious potential IMO.

Our MLC Enterprise NAND offers an endurance rate of 30,000 WRITE/ERASE cycles, or six times the rate of standard MLC, and SLC Enterprise NAND offers 300,000 cycles, or three times the rate of standard SLC.
Source.
 

It appears the article contained a typo. In addition to your pointer, on a second scan I also found another article, about Intel's roadmap update (out a couple of months before the article, on a recent thread).

http://www.xbitlabs.com/news/storag...n_Next_Gen_Enterprise_Solid_State_Drives.html

That also has 300K and 30K. This article suggests that eMLC isn't so badly priced:

"SLC remains the more robust technology, but at about quadruple the cost and a quarter of the density of EMLC silicon."
http://www.hpcwire.com/features/Nim...et-with-Disk-Priced-Flash-Array-98841054.html

That company's eMLC product is priced around disk products. Unfortunately, the wrapper around the disks (enterprise-grade, SAN-like stuff) makes the costs so high it's hard to tell (disks weren't major cost drivers before).
It suggests the eMLC drives will be priced higher, but not the 3x-4x higher we're seeing now for SLC drives in the 2.5" and 3.5" SATA form factors.
Intel is waiting for 25nm parts to make the switch, so it will take them till next year to ship.

Kind of curious, though, because it seems a number of folks (including Intel) have passed on 34nm eMLC and are going straight to a 25nm product. The 34nm is out. Either it came out too slowly (and missed the window where everyone is shifting to 25nm densities) or there was some volume/debug problem. Or a bit of both.


30K isn't bad if you can put a 2x multiplier on it by using the controller somehow. That puts eMLC in a similar ballpark to old SLC (2 * 6 => 12x better than old MLC erase cycles, and pretty close to the 100K watermark left by SLC).
 
Looks that way. :)

As per Lyndonville, the NAND type and die size weren't listed in the roadmap image from Intel (here). The Ephraim are shown to use 50nm SLC. Wonder what's going to actually be used (eMLC v. SLC)? :confused:

It should be a cheaper product if it's eMLC, but it wouldn't have the same write-cycle limits as the current Enterprise product based on current-gen SLC. I see it as a step backwards, unless eMLC becomes the "budget" enterprise segment (much like the X-25V is for the consumer units).

I'll dig a little deeper, as I'm curious.
 