No, I constantly read the same ~100 gigs out of a total 1TB+.

[...]

Video editing is a good example. You may have a TB containing all footage, every take, but when you're editing you pick selects, so some data will be read rarely if ever, while the footage selected for the cut will get played over and over.

You'd probably still do better to buy a large MLC SSD; an SLC drive of the size you'd need for caching would set you back thousands for 250GB.
 
How do you defragment the hard disk without trashing the SSD?

With the Momentus XT you don't have to worry about that.
 
What About SSDs & Thunderbolt?

Hi
Our customers are moving 20 GB video files around all the time, and thus we're very interested in Thunderbolt. But if each end of the transfer is a hard drive, the fastest they can transfer is way less than Thunderbolt's top speed. So I thought perhaps two computers with SSDs could send/receive the file more quickly and let it trickle onto the hard drive over time, allowing users to transfer quickly and leave sooner than with a hard-drive-to-hard-drive transfer. Plausible?
 
Isn't the Network Your Bottleneck?

Hi
Our customers are moving 20 GB video files around all the time, and thus we're very interested in Thunderbolt. But if each end of the transfer is a hard drive, the fastest they can transfer is way less than Thunderbolt's top speed. So I thought perhaps two computers with SSDs could send/receive the file more quickly and let it trickle onto the hard drive over time, allowing users to transfer quickly and leave sooner than with a hard-drive-to-hard-drive transfer. Plausible?

If you're transferring over Gigabit Ethernet (GbE), current spinning hard drives have roughly GbE speeds, and commonly available eSATA dual-drive RAID-0 drives are certainly faster than GbE.

If you're not transferring over GbE - just what do you plan to use?

As far as "trickling on to the hard drive", that assumes that the SSD/HDD caching does completely write-back caching - and that it allows incoming writes to flush all of the read data in the caches. That's unlikely to be the case.
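
To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python - the throughput figures are assumptions for illustration, not measurements of any particular drive or network:

# Back-of-the-envelope times for a 20 GB file transfer.
# All throughput figures are assumptions for illustration,
# not measurements of any particular hardware.

FILE_GB = 20
RATES_MB_S = {
    "GbE link": 125,     # ~1 Gbit/s line rate, before protocol overhead
    "single HDD": 120,   # assumed sustained rate of one 7200 rpm drive
    "SATA SSD": 250,     # assumed sustained rate of an SSD of the era
}

for name, rate in RATES_MB_S.items():
    seconds = FILE_GB * 1000 / rate
    print(f"{name:>10}: {seconds:5.0f} s")

# The slowest stage dominates: with a hard drive on either end,
# a faster link (Thunderbolt) barely shortens the transfer.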
 
How do you defragment the hard disk without trashing the SSD?

With the Momentus XT you don't have to worry about that.

Cube, I guess you're concerned about polluting the SSD with data that is active during defrag, but would not benefit longer term.

Typically caching systems have algorithms to detect cache-unfriendly data, such as massive sequential transfers and possibly unique access types such as defrag.

I suspect, and this is conjecture, that Z68 tracks access patterns to the most highly used data in its own memory. It must have enough memory to address 64GB of SSD-resident data. What I would do with this memory, if I was designing this thing, is track the most frequently used blocks over a period of time using a most-frequently-used algorithm, with a push-down stack that allows inactive block addresses to be discarded off the bottom of the stack. I'd probably also include an aging counter in the metadata, and after a short period of learning, I'd start promoting blocks to SSD. This approach should preclude defragged blocks from being promoted unless the same LBAs were read often.
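
To make that conjecture concrete, here is a minimal Python sketch of frequency-based promotion with aging - purely illustrative, with made-up table sizes and thresholds, and in no way Intel's actual SRT code:

# Illustrative sketch of frequency-based promotion with aging - a guess at
# the general approach, not Intel's actual SRT algorithm. All sizes and
# thresholds are made up for the example.

TABLE_SIZE = 4096          # how many candidate block addresses we track
PROMOTE_THRESHOLD = 8      # reads needed before a block is promoted to SSD
AGE_SHIFT_PERIOD = 10_000  # halve all counters every N accesses ("aging")

class PromotionTracker:
    def __init__(self):
        self.counts = {}     # LBA -> access count
        self.accesses = 0
        self.promoted = set()

    def record_read(self, lba):
        self.accesses += 1
        self.counts[lba] = self.counts.get(lba, 0) + 1

        # Aging: periodically halve every counter so stale traffic
        # (e.g. a one-off defrag pass) fades out of the table.
        if self.accesses % AGE_SHIFT_PERIOD == 0:
            self.counts = {k: v >> 1 for k, v in self.counts.items() if (v >> 1) > 0}

        # Push-down behaviour: if the table is full, drop the least
        # frequently used entry off the bottom.
        if len(self.counts) > TABLE_SIZE:
            coldest = min(self.counts, key=self.counts.get)
            del self.counts[coldest]

        # Promote only blocks that keep being read.
        if self.counts.get(lba, 0) >= PROMOTE_THRESHOLD and lba not in self.promoted:
            self.promoted.add(lba)
            return True   # caller would now copy this block to the SSD cache
        return False

# Example: a block read 8 times gets promoted; a one-off read never does.
tracker = PromotionTracker()
print(any(tracker.record_read(42) for _ in range(8)))   # True
print(tracker.record_read(99))                          # False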
 
***Stupid Question Alert***

Is there any possibility to connect a small external SSD via TB (once the accessories are available) and use this feature?

Now that'd be cool.
 
Not practical

Is there any possibility to connect a small external SSD via TB (once the accessories are available) and use this feature?

Now that'd be cool.

Putting the cache on an easily disconnected external drive wouldn't be a great idea.

You'd be forced to disallow caching of writes (or have strict write-through instead of write-back). If the SSD disconnects, or even hiccups, it might be necessary to flush the cache and start over. Even putting the system to sleep might have to flush the cache.

I would expect that the Intel technology only works with the native SATA ports on the Z68 - and that it simply could not work on a different SATA controller on a PCIe extender (TBolt).
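
For anyone fuzzy on the write-back vs. write-through distinction above, here's a toy Python sketch - the "devices" are just dictionaries, and it only shows why losing a write-back cache device loses data while losing a write-through one doesn't:

# Toy illustration of write-through vs write-back caching. Not real driver
# code - the devices are plain dictionaries.

class CachedVolume:
    def __init__(self, write_back=False):
        self.hdd = {}        # stand-in for the slow hard disk
        self.ssd = {}        # stand-in for the SSD cache
        self.dirty = set()   # blocks newer on SSD than on HDD (write-back only)
        self.write_back = write_back

    def write(self, block, data):
        self.ssd[block] = data
        if self.write_back:
            self.dirty.add(block)      # HDD copy is now stale
        else:
            self.hdd[block] = data     # write-through: HDD updated immediately

    def read(self, block):
        return self.ssd.get(block, self.hdd.get(block))

    def lose_cache(self):
        """Simulate the external SSD disappearing (unplugged Thunderbolt cable)."""
        lost = {b: self.ssd[b] for b in self.dirty}   # data that existed nowhere else
        self.ssd.clear()
        self.dirty.clear()
        return lost

vol = CachedVolume(write_back=True)
vol.write(1, "project.mov edits")
print(vol.lose_cache())   # {1: 'project.mov edits'} - data that never reached the HDD

vol = CachedVolume(write_back=False)   # write-through
vol.write(1, "project.mov edits")
print(vol.lose_cache())   # {} - nothing lost, the HDD already has the data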
 
Cube, I guess you're concerned about polluting the SSD with data that is active during defrag, but would not benefit longer term.

Typically caching systems have algorithms to detect cache-unfriendly data, such as massive sequential transfers and possibly unique access types such as defrag.

I suspect, and this is conjecture, that Z68 tracks access patterns to the most highly used data in its own memory. It must have enough memory to address 64GB of SSD-resident data. What I would do with this memory, if I was designing this thing, is track the most frequently used blocks over a period of time using a most-frequently-used algorithm, with a push-down stack that allows inactive block addresses to be discarded off the bottom of the stack. I'd probably also include an aging counter in the metadata, and after a short period of learning, I'd start promoting blocks to SSD. This approach should preclude defragged blocks from being promoted unless the same LBAs were read often.

I am not concerned about polluting the SSD, but about wearing it out by defragmenting the hard disk regularly.
 
I am not concerned about polluting the SSD, but about wearing it out by defragmenting the hard disk regularly.

Do you really still not only defragment hard disks, ...
... but even do so regularly?
......and plan to keep doing that even after adding SSD caching?

Makes me curious about what kind of thing you do with those disks.
 
Do you really still not only defragment hard disks, ...
... but even do so regularly?
......and plan to keep doing that even after adding SSD caching?

Makes me curious about what kind of thing you do with those disks.

If you have virtual machines on a hard disk you must defragment both real and virtual disks.
 
If you have virtual machines on a hard disk you must defragment both real and virtual disks.

Do you set your disks (VHDs) to dynamically increase in size? That is a huge performance hit (in extra I/O), and it can be a cause of fragmentation.
 
The most interesting part of the AnandTech review:

Make no mistake, this isn't a hardware feature but it's something that Intel is only enabling on Z68. All of the work is done entirely in Intel's RST 10.5 software, which will be made available for all 6-series chipsets but Smart Response Technology is artificially bound to Z68 alone (and some mobile chipsets—HM67, QM67).

Which invalidates the conclusion of the MacRumors article:

It of course remains to be seen if Apple will even adopt SSD caching technology as an alternative to pricier standard SSD options, but the company's embracing of the Z68 chipset at least opens the door to the possibility at some point down the road.

Apple could have done SSD caching ages ago; it doesn't need a Z68 chipset (or any other hardware feature) to do that. This SSD caching is purely a software thing; the only new thing here is that Intel is making it available only for a few select chipsets at the moment.

ZFS has supported software SSD caching for quite some time now; there it's called L2ARC.
 
Is there any possibility to connect a small external SSD via TB (once the accessories are available) and use this feature?

Now that'd be cool.

Intel offers two performance/integrity options: best performance with deferred writes to the hard disk, best integrity with write-through. In the latter case, this might just work with a Thunderbolt-connected SSD.
 
Nothing new - just a bigger version of hybrid HDs

I am surprised it has taken Apple/Intel so long to get round to this. I have been using a Seagate Momentus XT 500GB hybrid HD in my 13" 2.53 GHz for months now. It works brilliantly. I ran some back-to-back tests before it went in and then again after a week, when it had learnt the frequently used files. Startup is about 30% faster. General tasks like photo manipulation in PS CS5 are about 25% faster, and in general I now rarely see our old enemy the SBOD (spinning beachball of death); my MBP seems much more responsive.

I wish Seagate did a 3.5" 2 or 2.5TB one for my iMac, as the 1TB disk I put in there 3 years ago is nearly full, what with 103 MB TIFFs from my regular camera, and MF and LF scans often running to double that.
 
Do you set your disks (VHDs) to dynamically increase in size? That is a huge performance hit (in extra I/O), and it can be a cause of fragmentation.

That is not the problem. When the real or virtual disks get filled up, there will be fragmentation.
 
If you have virtual machines on a hard disk you must defragment both real and virtual disks.

Nothing like "must".
For example, if you use any kind of linked machines, clones, snapshots, or even sometimes sparse disk images, defragmenting can make things worse.
 
The SRT SSD caching is a compelling feature - and hopefully it will encourage more users to buy...and force the price down...
 
Intel offers two performance/integrity options: best performance with deferred writes to the hard disk, best integrity with write-through. In the latter case, this might just work with a Thunderbolt-connected SSD.

If the TBolt link is disconnected, the machine reboots, or probably even if it sleeps - it is probably necessary to flush the cache.

Reason being that any of the above could mean that either the SSD or the HDD could have been modified during the disconnect. That means that the cache could be out of synch - and the easiest method to resynch is to flush the cache and start over. This is similar to what happens if you pull a disk out of a RAID-5 array and reinsert - that disk has to be completely rewritten. (A hardware/software RAID-5 can tell if the disk was pulled while the array stayed off, and not resynch on every power cycle or reboot.)


Nothing like "must".
For example, if you use any kind of linked machines, clones, snapshots, or even sometimes sparse disk images, defragmenting can make things worse.

The standard method is to defragment the virtual disk from the guest, zero free space from the guest, then defragment the host image (folding any snapshot or linked images to a new base container file).

You're absolutely right that simply defragmenting on the guest alone can make an expandable disk image much worse. Two main reasons for this:
  1. Even though from the guest the disk may appear very fragmented, on the host image there is temporal locality. As you're writing from the guest, the expanded segments are often close together. This can mean less physical head movement, although from the guest it looks like there would be a lot of head movement.
  2. The writes from defragmenting may expand the disk further and make the real disk more fragmented - and there's nothing to say that two adjacent clusters seen from the guest are on two adjacent clusters on the physical disk.
 
If the TBolt link is disconnected, the machine reboots, or probably even if it sleeps - it is probably necessary to flush the cache.

Reason being that any of the above could mean that either the SSD or the HDD could have been modified during the disconnect. That means that the cache could be out of synch - and the easiest method to resynch is to flush the cache and start over. This is similar to what happens if you pull a disk out of a RAID-5 array and reinsert - that disk has to be completely rewritten. (A hardware/software RAID-5 can tell if the disk was pulled while the array stayed off, and not resynch on every power cycle or reboot.)

The standard method is to defragment the virtual disk from the guest, zero free space from the guest, then defragment the host image (folding any snapshot or linked images to a new base container file).

You're absolutely right that simply defragmenting on the guest alone can make an expandable disk image much worse. Two main reasons for this:
  1. Even though from the guest the disk may appear very fragmented, on the host image there is temporal locality. As you're writing from the guest, the expanded segments are often close together. This can mean less physical head movement, although from the guest it looks like there would be a lot of head movement.
  2. The writes from defragmenting may expand the disk further and make the real disk more fragmented - and there's nothing to say that two adjacent clusters seen from the guest are on two adjacent clusters on the physical disk.

Does defragging even do anything useful on a RAID array? It would seem unnecessary. Of course, for an SSD I would think fragmentation wouldn't affect performance in a noticeable way.
 
Does defragging even do anything useful on a RAID array? It would seem unnecessary. Of course, for an SSD I would think fragmentation wouldn't affect performance in a noticeable way.

On a RAID, it reduces head movements for sequential transfers (and closely spaced random transfers). It is just as important for RAID as for JBOD if the disk is busy. (If the disk is mostly idle, RAID read performance is better than JBOD, so the cost isn't as noticeable.)

On an SSD, fragments don't cause slowdowns due to head movements, but they can still cause slowdowns due to the overhead of additional I/O calls needed to access scattered small fragments. Occasional defragmentation (a few times a year) is good - no need for frequent defrags unless free space is extremely low and the disk has lots of activity.

If a file has thousands of fragments per GB, sequential access on an SSD will be noticeably slower than if it is contiguous.
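
Purely as illustration, some rough arithmetic in Python (the per-request cost and throughput figure are assumptions, not measurements):

# Rough arithmetic for fragmentation overhead on an SSD.
# The per-request cost and throughput figure are assumptions only.

FILE_GB = 1
FRAGMENTS = 5000          # "thousands of fragments per GB"
SEQ_MB_S = 250            # assumed sequential read rate
PER_REQUEST_MS = 0.1      # assumed fixed cost per extra I/O request

contiguous_s = FILE_GB * 1000 / SEQ_MB_S             # ~4.0 s
extra_s = FRAGMENTS * PER_REQUEST_MS / 1000          # ~0.5 s of request overhead
avg_fragment_kb = FILE_GB * 1024 * 1024 / FRAGMENTS  # ~210 KB per fragment

print(f"contiguous: {contiguous_s:.1f} s, fragmented: {contiguous_s + extra_s:.1f} s "
      f"(+{100 * extra_s / contiguous_s:.0f}%), avg fragment {avg_fragment_kb:.0f} KB")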
 
Nothing like "must".
For example, if you use any kind of linked machines, clones, snapshots, or even sometimes sparse disk images, defragmenting can make things worse.

When simple VirtualBox VMs start thrashing because the real or virtual disks have become severely fragmented, you MUST defragment them to make them usable again.
 