"Ars Technica has suggested that this would be required for Time Machine to function, but isn't Time Machine already included in the developer previews?"

I think this has been implied earlier in the thread but never stated outright: Ars suggested that Time Machine was a whole lot like ZFS, but probably wasn't ZFS. Right now, Time Machine most likely uses a series of hard links at the BSD layer, or something similar, to keep its snapshots. Not a bad approach (it's how we do backups at work), but not very efficient for big files.
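For anyone curious, here's a minimal sketch of that hard-link technique using rsync (nothing Apple has confirmed; the paths and dates are made up):

    # Each snapshot is a complete directory tree, but files that haven't
    # changed become hard links into the previous snapshot, costing no space.
    rsync -a --link-dest=/backups/2007-06-10 /Users/ /backups/2007-06-11/

The weakness is exactly the one mentioned above: a changed file is copied whole, so a 4 GB file with one changed byte costs another 4 GB, which is where ZFS's block-level copy-on-write would win.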
From the Wikipedia article: "Populating 128-bit file systems would exceed the quantum limits of earth-based storage. You couldn't fill a 128-bit storage pool without boiling the oceans."

Great, data storage as a primary cause of global warming... "Americans are 5% of the world's population but they generate 35% of the world's data" is just around the corner now...
"1) All operations are 'copy on write'. 2) Data are never overwritten; you can always go back to last month's version. The system saves space by only writing changes."

Well, you can get last month's version assuming nothing else took that block for new storage...
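In ZFS terms that caveat is handled by snapshots: old blocks are only guaranteed to stick around while a snapshot still references them. A minimal sketch, with a made-up dataset name:

    # Pin the current state; thanks to copy-on-write this is nearly free.
    zfs snapshot tank/home@2007-05
    # ...a month of writes later, the old blocks are still referenced...
    zfs rollback tank/home@2007-05
    # Without the snapshot, freed blocks get recycled and last month is gone.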
"Finally Mac OS can get rid of those disk drive icons. There can just be 'storage', much like you don't care about how many RAM chips you have, only about the total amount of RAM. Disk can be like that too: just open the box and slide in one more disk, and the rest is 'magic'. Just like with RAM."

Nah. People have a lot of reasons for wanting separate data stores. I don't want to plug in my digital camera and have it become "part of the pool". I like my FireWire drives separate so I can power them down and shut them up. There will always be a place for human storage management; it's just that the pool approach is good for large data stores.
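To be fair, on ZFS the quoted "slide in one more disk" really is about that simple; a sketch with made-up pool and device names:

    # Create a pool, then grow it later: existing data is untouched
    # and the new capacity shows up immediately.
    zpool create tank mirror disk0 disk1
    zpool add tank mirror disk2 disk3

And the objection stands on its own terms: a camera or a FireWire drive stays a separate device simply by never being added to the pool.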
"4) There is not enough data on Earth to fill a ZFS file system. Or at least, if you wrote to a disk 24x7 you would not live long enough to fill up ZFS (although you would fill up quite a few physical drives)."

Statements like this are just asking to be proven silly... One ZFS file system can store the DNA sequences of every cell in every creature on the planet? The state of every electron in every atom on the planet? The energy, frequency, and vector of every photon passing by the planet? 3e38 is big, but not infinite...
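The "not in your lifetime" half does check out, though. A back-of-envelope, assuming the full 2^128-byte theoretical pool size and a sustained 1 GB/s of writes:

    2^128 bytes ≈ 3.4 x 10^38 bytes
    3.4 x 10^38 bytes / (10^9 bytes/sec) ≈ 3.4 x 10^29 sec ≈ 10^22 years

Even a single 2^64-byte dataset would take roughly 585 years to fill at that rate.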
"You can do that now. It uses LVM; you can sync partitions without a reboot. Just use partprobe, then pvcreate to turn the partitions into physical volumes, pull them all together with vgcreate, and use lvextend to make them bigger, followed by an online filesystem resize. You can grow a drive to whatever you want. You can also use pvmove, vgreduce, and pvremove if a drive is about to fail and you have another in sync as a mirror to bring online."

Has this been wrapped in a tool anywhere? It seems like there are plenty of people who could use this now but wouldn't want to touch the command line...
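A minimal sketch of both workflows, with made-up device and volume names (vg0, /dev/sdb1, and so on) on a Linux box with the LVM2 tools:

    # Grow: add a new partition to the volume group, extend the
    # logical volume, then resize the filesystem while it's mounted.
    pvcreate /dev/sdb1
    vgextend vg0 /dev/sdb1
    lvextend -L +100G /dev/vg0/data
    resize2fs /dev/vg0/data      # ext3 supports growing online

    # Evacuate: migrate a failing disk's extents to the other disks,
    # then drop it from the group.
    pvmove /dev/sdc1
    vgreduce vg0 /dev/sdc1
    pvremove /dev/sdc1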
"But moreover, if ZFS is the default for Leopard (that's a pretty big if, but not out of the question -- God bless 'top secret' features), then I would imagine that it would automatically create a pool out of any internal drives, but that external drives, by default, would not be considered part of that pool. This is the kind of thing that Apple excels at: because the internal hard drives are inside the box, we should think of them as one -- together they are the storage space of the machine. But anything outside of the box logically seems as though it should be a separate part unless you specifically tell the computer otherwise. Thus, there could be a checkbox under 'Get Info' or something similar to add that drive to the pool. I can't imagine that Apple would make it much more complicated than that."

This sounds like the right model to me. My external drives are individual units -- or at least one unit per enclosure. I don't want that assimilated into the Borg. Anything inside the case is considered "primary storage" and should flow together and cooperate.
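If it shipped that way, the checkbox would presumably come down to a single pool operation; a sketch with a made-up pool name and device path, not anything Apple has announced:

    # "Add this drive to the pool": the external disk's capacity
    # merges into the machine's storage pool.
    zpool add tank /dev/disk2

One wrinkle worth a scary confirmation dialog: in ZFS as it stands, a device added this way can't later be removed from the pool, so that checkbox would be a one-way door.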