Even still, you aren't going to max that out on a laptop.

Yep - I agree, no reason to buy anything with greater perf than this on a non-unibody MBP.

Now I'll just wait for the price to come down a bit, or people to start yelling that these things are failing after two weeks :)

Or, I'll just be impatient and order one...
 
Yep - I agree, no reason to buy anything with greater perf than this on a non-unibody MBP.
I would disagree w/ that. Some of the machines in the generation before the unibody MacBooks have faster CPUs, for example if you had the highest-end 15" pre-unibody and then moved to the low-end unibody.

Plus, we're talking about disk speed. Even an old MacBook would really benefit from one of these, provided you're going to keep it around for a while.
 
So, what are your XBench disk scores? (Not that XBench is a great benchmark)
I'll post them up later in the day. I'm doing a Mozy backup, a fresh Time Machine backup, and a few work-related activities now. Once I wrap that all up, I'll restart the machine and run XBench. Yeah, I don't like XBench that much either... but at least it's something.
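For anyone who wants a rough number without XBench, here is a minimal Python sketch that just times a sequential write and read of a scratch file. The path and sizes are arbitrary assumptions, and the read figure will be flattered by OS caching, so treat it as a sanity check rather than a real benchmark.

```python
# Rough sequential write/read timing, nothing like a real benchmark suite.
# The scratch file path and 1 GiB size are arbitrary assumptions; adjust to taste.
import os
import time

TEST_FILE = "/tmp/ssd_throughput_test.bin"   # hypothetical scratch location
CHUNK = 1024 * 1024                          # 1 MiB per write
TOTAL_MB = 1024                              # write/read 1 GiB in total

def sequential_write():
    buf = os.urandom(CHUNK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())                 # force data out of the OS cache
    return TOTAL_MB / (time.time() - start)

def sequential_read():
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    return TOTAL_MB / (time.time() - start)

if __name__ == "__main__":
    print(f"write: {sequential_write():.1f} MB/s")
    print(f"read:  {sequential_read():.1f} MB/s")  # likely inflated by caching
    os.remove(TEST_FILE)
```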
 
I'll post them up later in the day. I'm doing a Mozy backup, a fresh Time Machine backup, and a few work-related activities now. Once I wrap that all up, I'll restart the machine and run XBench. Yeah, I don't like XBench that much either... but at least it's something.

Can't wait to see them! I'm still deciding on what to get for my MacBook :p
 
I'll post them up later in the day. I'm doing a Mozy backup, a fresh Time Machine backup, and a few work-related activities now. Once I wrap that all up, I'll restart the machine and run XBench. Yeah, I don't like XBench that much either... but at least it's something.

Dude, a Mozy backup can take days! Where is your team spirit???? :p

I had vowed not to buy an SSD until SL was released, but I just can't take it anymore. I'm getting a new Mac Pro on Friday and want an SSD for the boot drive. I'm debating the Intel 80GB vs. a larger one like you own.

I need info, and I need it NOW! Don't make me beg.
 
Sure... and the successor to that drive will be even faster... etc etc... but eventually one has to pull the trigger. A fast 256GB SSD dropped below my $500 price target, so that was the sign for me!
Not exactly. SSD tech is not like CPU tech, where you can bank on the speed doubling every 6-12 months. The big difference between Samsung's current offerings and their new enterprise offering is a fundamental difference in SSD tech: SLC vs. MLC. While SLC is geared towards the enterprise, it is the stuff dreams are made of for tweaked-out speed-junkie end users (me?). But er, yeah. Go for the 256GB one under $250, just avoid the sh*tty G.Skill controller-based SSDs (request: flame).
 
Not exactly. SSD tech is not like CPU tech, where you can bank on the speed doubling every 6-12 months. The big difference between Samsung's current offerings and their new enterprise offering is a fundamental difference in SSD tech: SLC vs. MLC. While SLC is geared towards the enterprise, it is the stuff dreams are made of for tweaked-out speed-junkie end users (me?). But er, yeah. Go for the 256GB one under $250, just avoid the sh*tty G.Skill controller-based SSDs (request: flame).
I never said it was like CPU technology and follows Moore's Law. But it is definitely dropping in price like a rock while speed is increasing. Furthermore, initial SSDs didn't include very much power optimization like that. And they're finding more tricks to get more and more performance out of MLCs as well. I think you're referring to the OLD G.Skill w/ the JMicron controller. The new one utilizes it differently to put the internal SSDs into a RAID 0 striped config. Anyhow, this thing is really screaming on my system. I'll post more details later.
 
I never said it was like CPU technology and follows Moore's Law. But it is definitely dropping in price like a rock while speed is increasing. Furthermore, initial SSDs didn't include very much power optimization like that. And they're finding more tricks to get more and more performance out of MLCs as well. I think you're referring to the OLD G.Skill w/ the JMicron controller. The new one utilizes it differently to put the internal SSDs into a RAID 0 striped config. Anyhow, this thing is really screaming on my system. I'll post more details later.

Ok, it's later now. :apple:

One thing I've read is that some of these devices are much slower once they've reached steady state (i.e. all memory has been written to at least once). Therefore I wonder if you could fill the drive completely, then erase whatever you don't need and do some benchmarks, including things like copying large directories (iTunes) and whatever else you wish.

Really appreciate it.
 
Speaking of stabilization, so will prices on single 4GB RAM modules.

Sorry, I tend to get carried away with acronyms. Also, just to be clear, I'm waiting for Snow Leopard (SL) to be released first because I anticipate that the price of 250GB SSDs will stabilize, somewhat, around that time (mid-2009). If I'm lucky, there will be a clear winner by then in terms of that tricky balance between performance and price. Truthfully, I'm excited about the SanDisk SSD, but my personal sweet spot would be $400.

I thought I should mention that so no one thinks that SL will adversely affect current SSDs or anything like that.
 
Therefore I wonder if you could fill the drive completely, then erase whatever you don't need and do some benchmarks, including things like copying large directories (iTunes) and whatever else you wish.
Really appreciate it.
Haha, no way. I didn't buy this drive just to test all day; I bought it to replace my old HD, and that's what it's doing. It's now the primary drive in my computer, so I'm not going to be erasing it. In any case, I think this thing is rated for something like 25 years. That's what it said on the box. Not sure exactly how many complete write/re-write cycles that works out to, but it's way more than I'll ever use. I'll sell this computer in a year to eighteen months when I want a new one. If you are interested in the results I have gotten, you can read my post here.
 
Ok, it's later now. :apple:

One thing I've read is that some of these devices are much slower once they've reached steady state (i.e. all memory has been written to at least once). Therefore I wonder if you could fill the drive completely, then erase whatever you don't need and do some benchmarks, including things like copying large directories (iTunes) and whatever else you wish.

Really appreciate it.

Well, that's a red flag. Where have you seen or read that?
 
Ok, it's later now. :apple:

One thing I've read is that some of these devices are much slower once they've reached steady state (i.e. all memory has been written to at least once). Therefore I wonder if you could fill the drive completely, then erase whatever you don't need and do some benchmarks, including things like copying large directories (iTunes) and whatever else you wish.

Really appreciate it.

Could you cite a source for that? I've never heard anything like that before.
 
No clue where pprior read that information from, as I've never heard of such a thing.

As for personal experience, I've already filled up my SSD once and there is no noticeable decrease in performance. Everything still works great and the benches are still putting out results similar to when the drive was new.
 
No clue where pprior read that information from, as I've never heard of such a thing.

As for personal experience, I've already filled up my SSD once and there is no noticeable decrease in performance. Everything still works great and the benches are still putting out results similar to when the drive was new.

I've read that in several reviews - apparently Intel fills their devices during testing.

I know for a fact it's mentioned somewhere in this thread, which is filled with people who know far more than I on this topic:

http://forum.notebookreview.com/showthread.php?t=208242&page=235

I'm just repeating what I've read, but apparently once the SSD has been filled, the device then has to rewrite cells, the allocation and wear leveling start to happen, and this can slow things down, especially with some chips.

Again, survey the thread I posted; it's in there. That's one reason why some people get overly optimistic benchmarks on new SSDs but see performance drop later.
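To make that concrete, here is a toy model in Python, with entirely made-up page counts and costs, of why a drive whose NAND has all been written at least once can benchmark slower than a fresh one: on a fresh drive every write lands in an already-erased page, while at steady state a block erase has to be paid for first. Real controllers handle this with background garbage collection and are far more sophisticated, so treat this only as an illustration of the effect.

```python
# Toy model of why a fresh SSD writes faster than one whose NAND has all been
# touched: fresh pages can be programmed directly, but once every block has
# been written, a block erase has to happen before new data can land there.
# All timings and geometry here are made-up illustrative numbers.

PAGES_PER_BLOCK = 128
BLOCKS = 64
PROGRAM_COST = 1      # arbitrary time units to program one page
ERASE_COST = 20       # erasing a whole block is much more expensive

free_pages = PAGES_PER_BLOCK * BLOCKS   # factory-fresh: everything is erased

def write_page():
    """Return the simulated cost of writing one page."""
    global free_pages
    if free_pages > 0:
        free_pages -= 1
        return PROGRAM_COST                 # fresh drive: just program a page
    return ERASE_COST + PROGRAM_COST        # steady state: pay for an erase first

fresh_cost = sum(write_page() for _ in range(PAGES_PER_BLOCK * BLOCKS))
steady_cost = sum(write_page() for _ in range(PAGES_PER_BLOCK * BLOCKS))
print("fresh drive cost:  ", fresh_cost)
print("steady state cost: ", steady_cost)   # much higher per written page
```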
 
I believe what you're talking about is that all SSDs have a limited number of write/re-write cycles. However, this is a really high number, and even a LOT of disk access will likely never max that out for years and years. I never keep a computer for more than a few years, so by the time I sell this one, I'm sure there will be faster SSDs with longer lifetimes and lower prices! :D
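For a rough sense of why the cycle limit rarely matters on a home machine, here is a back-of-the-envelope estimate. None of these figures come from this particular drive's spec sheet; the cycle count, write amplification, and daily write volume are assumed ballpark numbers for MLC drives of this era.

```python
# Back-of-the-envelope endurance estimate. All inputs are assumptions,
# not numbers from the actual drive, just to show the order of magnitude.

capacity_gb = 256          # drive size
pe_cycles = 10_000         # assumed program/erase cycles per MLC cell
write_amplification = 2    # assumed overhead from wear leveling / remapping
host_writes_gb_per_day = 20

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_host_writes_gb / host_writes_gb_per_day / 365

print(f"~{total_host_writes_gb / 1024:.0f} TB of host writes before wear-out")
print(f"~{lifetime_years:.0f} years at {host_writes_gb_per_day} GB written per day")
```

With these assumptions the math works out to well over a century of typical use, which is why the "25 years" figure on the box is effectively "longer than you'll own the machine."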
 
No, I'm not talking about lifetimes. I'm talking about the fact that once an SSD has been completely written to, cells must be erased as new data is written, and at that point the device is considered to be at "steady state".

Google Steady State and SSD and you can read about this. I may not understand it fully, but I'm not making this stuff up....

(and this is why I asked you to fill your drive and THEN comment on performance - that will give us [and you] an idea of what real world performance will be after a bit of use).


http://forums.anandtech.com/messageview.aspx?catid=83&threadid=2269016&enterthread=y&STARTPAGE=7

Intel explains the situation thus:
SSDs all have what is known as an “Indirection System” – aka an LBA allocation table (similar to an OS file allocation table). LBAs are not typically stored in the same physical location each time they are written. If you write LBA 0, it may go to physical location 0, but if you write it again later, it may go to physical location 50, or 8.567 million, or wherever. Because of this, all SSDs’ performance will vary over time and settle to some steady state value. Our SSD dynamically adjusts to the incoming workload to get the optimum performance for the workload. This takes time. Other lower performing SSDs take less time as they have less complicated systems. HDDs take no time at all because their systems are fixed logical to physical systems, so their performance is immediately deterministic for any workload IOMeter throws at them.

The Intel ® Performance MLC SSD is architected to provide the optimal user experience for client PC applications, however, the performance SSD will adapt and optimize the SSD’s data location tables to obtain the best performance for any specific workload. This is done to provide the ultimate in a user experience, however provides occasional challenges in obtaining consistent benchmark testing results when changing from one specific benchmark to another, or in benchmark tests not running with sufficient time to allow stabilization. If any benchmark is run for sufficient time, the benchmark scores will eventually approach a steady state value, however, the time to reach such a steady state is heavily dependent on the previous usage case. Specifically, highly random heavy write workloads or periodic hot spot heavy write workloads (which appear random to the SSD) will condition the SSD into a state which is uncharacteristic of a client PC usage, and require longer usages in characteristic workloads before adapting to provide the expected performance.

When following a benchmark test or IOMeter workload that has put the drive into this state which is uncharacteristic of client usage, it will take significant usage time under the new workload conditions for the drive to adapt to the new workload, and therefore provide inconsistent (and likely low) benchmark results for that and possibly subsequent tests, and can occasionally cause extremely long latencies. The old HDD concept of defragmentation applies but in new ways. Standard Windows defragmentation tools will not work.

SSD devices are not aware of the files written within, but are rather only aware of the Logical Block Addresses (LBAs) which contain valid data. Once data is written to a Logical Block Address (LBA), the SSD must now treat that data as valid user content and never throw it away, even after the host “deletes” the associated file. Today, there is no ATA protocol available to tell the SSDs that the LBAs from deleted files are no longer valid data. This fact, coupled with highly random write testing, leaves the drive in an extremely fragmented state which is optimized to provide the best performance possible for that random workload. Unfortunately, this state will not immediately result in characteristic user performance in client benchmarks such as PCMark Vantage, etc. without significant usage (writing) in typical client applications allowing the drive to adapt (defragment) back to a typical client usage condition.

In order to reset the state of the drive to a known state that will quickly adapt to new workloads for best performance, the SSD’s unused content needs to be defragmented. There are two methods which can accomplish this task.

One method is to use IOMeter to sequentially write content to the entire drive. This can be done by configuring IOMeter to perform a 1 second long sequential read test on the SSD drive with a blank NTFS partition installed on it. In this case, IOMeter will “Prepare” the drive for the read test by first filling all of the available space sequentially with an IOBW.tst file, before running the 1 second long read test. This is the most “user-like” method to accomplish the defragmentation process, as it fills all SSD LBAs with “valid user data” and causes the drive to quickly adapt for a typical client user workload.

An alternative method (faster) is to use a tool to perform a SECURE ERASE command on the drive. This command will release all of the user LBA locations internally in the drive and result in all of the NAND locations being reset to an erased state. This is equivalent to resetting the drive to the factory shipped condition, and will provide the optimum performance.
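A heavily simplified Python sketch of the "Indirection System" the quote describes: logical block addresses map to whichever physical page happens to be free, a rewrite lands somewhere new and leaves a stale copy behind, a file delete never reaches the drive (no TRIM-style command existed at the time, as the quote notes), and a secure erase puts everything back in the erased pool. The structure and numbers here are illustrative assumptions, not how any particular controller actually works.

```python
# Toy flash translation layer illustrating the "indirection system" idea:
# an LBA is mapped to whichever physical page happens to be free, so
# rewriting LBA 0 lands it somewhere new and leaves the old copy behind
# as stale data until the drive cleans it up. Heavily simplified sketch.

import random

class ToyFTL:
    def __init__(self, physical_pages=1000):
        self.mapping = {}                         # LBA -> physical page
        self.free = list(range(physical_pages))   # erased, writable pages
        self.stale = set()                        # old copies awaiting erase
        random.shuffle(self.free)

    def write(self, lba, data=None):
        if lba in self.mapping:
            self.stale.add(self.mapping[lba])     # old location is now garbage
        page = self.free.pop()                    # new data goes to a fresh page
        self.mapping[lba] = page
        return page

    def delete_file(self, lbas):
        # Without a TRIM-style command the drive never learns these LBAs are
        # free, so the mapping (and the "valid" data behind it) stays put.
        pass

    def secure_erase(self):
        # Factory-reset behaviour: every mapping is dropped and every page
        # goes back to the erased pool, restoring fresh-drive performance.
        self.free = sorted(self.free) + sorted(self.stale) + sorted(self.mapping.values())
        self.mapping.clear()
        self.stale.clear()

ftl = ToyFTL()
print("LBA 0 first write  -> page", ftl.write(0))
print("LBA 0 second write -> page", ftl.write(0))   # lands on a different physical page
```

The sequential-fill method Intel describes amounts to writing every LBA once so the mapping reflects a simple, sequential workload; the secure erase shortcut corresponds to the `secure_erase()` reset above.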
 