The lacking features have been mentioned several times! Read the entire thread, THEN post something!
 
What the world needs is a modern-day open-source FS with drivers for OS X and Windows (which can somehow be used during install).
 
Yep. I really hope Apple uses BtrFS, that would lend it the "gravitas" to become the One True Filesystem. You know, if you're religious ;p
 
Is it my imagination or is there a history of potential partnerships between various companies and Sun turning sour? Not trying to make Sun out to be bad guys or anything but I'm pretty sure this isn't the first time I've heard a story about Sun's technology being withdrawn from some product or project because of difficulty in agreeing on licensing terms. I just honestly can't think of an example off the top of my head, but this has a deja vu kind of feeling to it.
 
Personally I was looking forward to being able to use a modern, efficient and featureful file system in a userland that doesn't suck (i.e. not Solaris). Let's be honest: HFS is showing its age, and there's only so much lipstick you can smear on anything before it starts to look a bit silly.
I'll disagree with you a bit on the Solaris userland sucking. A few tweaks (change default shell, editor, install a few GNU utilities) and its userland is on par with OS X's BSD userland. Most of the defaults are there for historical reasons for long-time Solaris admins. The package manager sucks, but Sun engineers have been hard at work on its replacement.

What Solaris lacks is a pretty windowing system, although it has gotten a lot better with JDS. But let's face it, most people don't use a Solaris machine as their primary desktop (I use a Power Mac G5 and SSH in, or use remote X).

Solaris has a ton of stuff that makes it a better server OS than just about anything else out there. If you want something that can take a beating and still have amazing uptime, Solaris is it.
 
Then you're suffering from a lack of imagination:

  • Lightweight snapshots à la ZFS are much preferred to monkeying around with directory hard links. If Time Machine were implemented with snapshots, then a single-byte change to a large file would result in just one block being backed up rather than the entire file;
  • Writable snapshots mean an instant "working copy" can be made of an entire directory tree. If you screw something up you can roll back instantly. You can even have multiple incarnations of the same tree, all sharing the unmodified blocks thus minimizing the space consumed;
  • ZFS's on-the-fly compression results in smaller files on disk, which results in greater throughput -- often much greater, depending on the data;
  • ZFS adapts to the characteristics of the installed physical devices, meaning if you have a small-but-fast SSD and a big-but-slow HDD installed in your computer then ZFS will store frequently accessed files on the SSD. This has the potential to speed up much disk IO by orders of magnitude;
  • ZFS's checksumming and online scrubbing eliminates the possibility of undetected data corruption between the platter and RAM. If you value your data then this is an important feature;
  • ZFS is transactional, which eliminates the possibility of file system corruption due to kernel panic, power failure etc. and means no waiting around for a file system check on next boot.
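
The snapshot point above can be sketched in a few lines. This is a toy model (not ZFS code, and the block size is an illustrative assumption): a snapshot just pins the current list of block references, and a one-byte write copies only the block it touches.

```python
# Toy copy-on-write file: a snapshot shares all unmodified blocks with
# the live file, so a tiny edit dirties exactly one block.

BLOCK_SIZE = 4  # deliberately tiny so the example is readable

class CowFile:
    def __init__(self, data: bytes):
        # The file is a list of references to blocks in a shared pool.
        self.blocks = [bytearray(data[i:i + BLOCK_SIZE])
                       for i in range(0, len(data), BLOCK_SIZE)]

    def snapshot(self):
        # A snapshot is just a copy of the reference list; no data
        # is duplicated, which is why it is near-instant.
        return list(self.blocks)

    def write(self, offset: int, payload: bytes):
        # Copy-on-write: duplicate only the block being modified.
        idx, pos = divmod(offset, BLOCK_SIZE)
        new_block = bytearray(self.blocks[idx])  # the only copied block
        new_block[pos:pos + len(payload)] = payload
        self.blocks = self.blocks[:idx] + [new_block] + self.blocks[idx + 1:]

    def read(self) -> bytes:
        return b"".join(bytes(b) for b in self.blocks)

f = CowFile(b"hello world, filesystem")
snap = f.snapshot()
f.write(6, b"W")  # a single-byte change

# Count blocks that differ between snapshot and live file:
changed = sum(1 for old, new in zip(snap, f.blocks) if old is not new)
print(changed)  # -> 1
```

A backup tool working at this level only has to copy the one changed block, not the whole file.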

To sum up: ZFS is faster, more reliable and less smelly than HFS. That's a lot of win even for desktop machines.

I have no argument with the impressive feature set of ZFS, but it's debatable how many of those offer real value to the end user on a desktop. Here are some counterpoints, in order:

  • Apple could have used a lower-level diffing algorithm for Time Machine if they wanted to. Modern SCMs do it without regard to the underlying file system (see Subversion), so lack of ZFS is not a limiting factor here. Also, I'm not sure how wise it is to develop a consumer-oriented backup app that's intrinsically tied to a particular FS. A higher-level, file-based approach has some advantages in terms of simplicity and reliability (e.g. you don't need the entire version history to rebuild every file).
  • Would end users or consumer applications see any real benefit from writable snapshots? This sounds more like a server-side feature.
  • File compression was added to HFS+ in Snow Leopard (see ditto --hfsCompression). As an add-on, it's not terribly powerful or elegant but it does provide a means to save space and increase throughput. This is obviously a user-visible feature so I agree it applies to desktops as much as servers.
  • How many desktop users have both an SSD and a hard disk? Of those, how many would want to combine them into a single pool? My guess is very, very few, if any at this point. And I expect desktop machines to simply switch from HDD to SSD in the future rather than mixing the two. HDD will still be used for backup, but that wouldn't benefit from on-the-fly physical adaptation.
  • How many data corruption issues that would be prevented/solved by checksumming do desktop users actually experience? I don't know the answer, but I wonder if you do either. My understanding is that current unrecoverable HDD errors are in the neighborhood of 1 in every 10-20 terabytes. That's pretty significant in a data center with petabytes of data, but not so significant on a desktop (not yet, anyway).
  • I don't see how a transactional FS is important for a desktop. A journaled and recoverable FS, yes, but a transactional FS is only really valuable if you manage petabytes of data where a disk integrity check would mean lots of costly downtime. It would be a "nice-to-have" for desktop users, but hardly a "must-have".
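
The "1 in every 10-20 terabytes" figure in the checksumming point above is easy to sanity-check. Consumer drives are commonly spec'd at one unrecoverable read error (URE) per 1e14 bits; the read volume below is an illustrative assumption, not a measurement.

```python
# Back-of-envelope check of the "1 error per 10-20 TB" claim.

URE_RATE = 1e-14   # probability of an unrecoverable error per bit read
TB = 1e12          # bytes per terabyte (decimal, as drive vendors count)

bits_per_ure = 1 / URE_RATE            # expected bits read between errors
tb_per_ure = bits_per_ure / 8 / TB     # ...expressed in terabytes
print(round(tb_per_ure, 1))            # -> 12.5

# Expected errors for a hypothetical desktop user reading ~1 TB/month
# over five years:
reads_tb = 1 * 12 * 5
print(round(reads_tb / tb_per_ure, 2))  # -> 4.8
```

So the spec-sheet rate does land in the quoted 10-20 TB range, and whether that matters on a desktop depends mostly on how much data you actually read.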

Personally I was looking forward to being able to use a modern, efficient and featureful file system in a userland that doesn't suck (i.e. not Solaris). Let's be honest: HFS is showing its age, and there's only so much lipstick you can smear on anything before it starts to look a bit silly.

Is HFS+ really negatively impacting your real-world experience with OS X on the desktop? Let's be honest: If you replaced HFS+ with ZFS on, say, a graphic designer's iMac, would they notice the difference? No, and neither would all but the most technically inclined Mac users.

Don't get me wrong, Apple needs to look around at replacements for HFS+, and their foray into ZFS shows that they're doing just that. But I don't think the loss of ZFS on Mac OS X is any kind of disaster. It's pretty clear from this interview with the lead ZFS engineers that their primary design goals were based on the needs of data centers and other server environments, not desktop users and certainly not OS X users. So I'm not at all convinced that it's the best choice to replace HFS+.

Add to that the more plausible theories about why Apple dropped it (legal fears induced by the Sun/NetApp patent lawsuits, restrictive licensing and Oracle's potential abandonment of ZFS for Btrfs), and it starts to look like they may have dodged a bullet. They certainly don't want to be left out in the cold with a "killer" server-oriented FS and no enterprise partners to help support it.
 
I wish someone could explain in simple terms why this ZFS is the biggest thing that everyone should jump on since the invention of the transistor.

How does ZFS improve things for the average Mac user over the current system?

ZFS uses the fast CPU and RAM on modern machines to perform aggressive performance optimisations. It's also capable of running a single "volume" on multiple physical drives, and takes into account the speed of each drive when deciding what file goes where. This would lead to a noticeably faster system with the potential to be *extremely* fast if you spend a bit of money on hardware designed to work with ZFS (for example, a 500GB hard drive and a 30GB SSD drive as a single disk with 500GB of space would deliver incredible performance on ZFS, and is either impossible or a waste of money on other filesystems).
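
The "what file goes where" idea can be sketched as a toy policy. This is not how ZFS's ARC/L2ARC caching actually works; it is a minimal illustration, with the tier capacity and file names as made-up assumptions: track how often each file is touched and keep the hottest ones on the small fast device.

```python
# Toy hybrid-pool placement: hottest files on the fast tier, the rest
# spill to the big slow tier. (Illustrative only, not ZFS's algorithm.)

SSD_CAPACITY = 2  # number of files the fast tier can hold (assumption)

class HybridPool:
    def __init__(self):
        self.hits = {}   # file name -> access count

    def access(self, name: str):
        self.hits[name] = self.hits.get(name, 0) + 1

    def placement(self):
        # Rank files by access count; the hottest go to the SSD.
        ranked = sorted(self.hits, key=self.hits.get, reverse=True)
        return {name: ("ssd" if i < SSD_CAPACITY else "hdd")
                for i, name in enumerate(ranked)}

pool = HybridPool()
for name in ["browser.db", "browser.db", "mail.idx", "mail.idx",
             "mail.idx", "movie.mkv"]:
    pool.access(name)

print(pool.placement())
# -> {'mail.idx': 'ssd', 'browser.db': 'ssd', 'movie.mkv': 'hdd'}
```

The frequently hit databases land on the SSD while the rarely read movie stays on the hard drive, which is the whole appeal of mixing a small SSD with a big HDD.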

ZFS also takes advantage of your fast CPU to do error correction that was impossible on the hardware of a decade ago. A faulty hard drive is far less likely to result in lost data when you're using ZFS. Even expensive data center RAID systems are less capable of dealing with hardware failures than ZFS. This will lead to less data loss if you don't have a good backup system, and you won't need to restore from backups as often if you do.
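
The error-correction claim boils down to: store a checksum with every block, and when a read fails the check, repair from a redundant copy. A minimal sketch, assuming a two-way mirror (ZFS itself uses stronger hashes and a Merkle tree of checksums, not per-block CRCs):

```python
# Sketch of checksummed, self-healing reads across a mirror pair.

import zlib

def put(block: bytes):
    # Store the block alongside its checksum, as written.
    return {"data": bytearray(block), "crc": zlib.crc32(block)}

def read_with_heal(primary, mirror):
    """Return the block; if the primary's checksum no longer matches
    (silent on-disk corruption), verify the mirror and heal the primary."""
    if zlib.crc32(bytes(primary["data"])) == primary["crc"]:
        return bytes(primary["data"]), False
    # Primary is corrupt; check the mirror copy, then repair in place.
    assert zlib.crc32(bytes(mirror["data"])) == mirror["crc"]
    primary["data"] = bytearray(mirror["data"])
    return bytes(primary["data"]), True

disk_a = put(b"important data")
disk_b = put(b"important data")
disk_a["data"][3] ^= 0xFF       # simulate a bit flip on one disk

data, healed = read_with_heal(disk_a, disk_b)
print(data, healed)             # -> b'important data' True
```

A RAID controller without checksums can't do this: it knows the two copies disagree but not which one is right. The checksum is what lets the filesystem pick the good copy and rewrite the bad one.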

Also, features in Mac OS X like Time Machine and Spotlight are not suited to a traditional filesystem, making them extremely complicated, which leads to poor performance/bugs/security holes. ZFS is designed from the ground up to be a good fit for features like that and would deliver a faster and more reliable Spotlight and Time Machine.

If you put all of that together, the end result would be faster macs, more reliable macs, and safer macs ("safe" as in security, and protection against bugs/mistakes/hardware failures).

Also, Time Machine/Spotlight on ZFS would give you better battery life on notebooks. Notice that Time Machine can be turned off when you're running on battery power? Spotlight indexing should probably have that option too, but it doesn't, since most users don't even know what "indexing" is, let alone what the setting should be. There would be no need to turn them off on ZFS.
 
Apple technically isn't abandoning ZFS (which is actually called Z, not ZFS). Over the last two years a successor to ZFS has been developed, designed to address its flaws as well as its licensing issues.

Apple will be taking this project and customizing it, much like it does with its OpenBSD and FreeBSD code.

It will become a standard with all the great features of ZFS, and will be used by many OS vendors, as well as SAN and NAS systems.

Apple isn't going back on giving you the features of ZFS; it's using the new offshoot project, which addresses all the concerns with ZFS.

From what I've heard, Apple plans to present a free open-source standard, loosely based on Btrfs (similar to ZFS) but heavily modified, and will have Intel push vendors to use the new format Apple develops, which will start shipping as an option in the next OS.
 
Apple technically isn't abandoning ZFS (which is actually called Z, not ZFS). Over the last two years a successor to ZFS has been developed, designed to address its flaws as well as its licensing issues.

Apple will be taking this project and customizing it, much like it does with its OpenBSD and FreeBSD code.

It will become a standard with all the great features of ZFS, and will be used by many OS vendors, as well as SAN and NAS systems.

Apple isn't going back on giving you the features of ZFS; it's using the new offshoot project, which addresses all the concerns with ZFS.

From what I've heard, Apple plans to present a free open-source standard, loosely based on Btrfs (similar to ZFS) but heavily modified, and will have Intel push vendors to use the new format Apple develops, which will start shipping as an option in the next OS.


Source?
 
Blunt Generic Comments FTW!!! Always brings out the people that have the need to reinforce themselves.

Just because you can list a big long line of credentials doesn't mean you can put aside bias on a subject, just as having a degree in law doesn't mean you're very lawful.

For the record I'm a student in computer science. :D
Let me know when you want to have a reasoned discussion rather than engaging in personal attacks.

MSFT is full of people with computer science degrees, and yet they seem to have difficulty coming up with good software. That is just a thought you should consider. A degree will equip you with skills but is no substitute for experience or talent.

I would suggest every developer starting out begin with Web development, as it is directly customer-facing, which can give you a better understanding of how to design user-friendly interfaces and provide you with insight into the general psychology of a typical user.

A CS degree will not teach you any of that.

PS. I have to side with NSMonkey on this issue. A lot of you are obviously enthusiastic about the potential of ZFS and have bought into the hype without considering the potential downside or the limitations compared to HFS+.
 
Ummm... blah blah...

It depends on the pragmatics of what they're attempting to attack. What they've learned from their experience might allow them to improve upon what ZFS was trying to achieve and what would benefit most of its users. Consider that it didn't take Apple nearly as long to make HFS and HFS+ work as Sun has taken on ZFS. True, both of those file systems aren't nearly as complex or all-encompassing as ZFS, but the question that needs answering is: does anyone truly need everything ZFS provides? Or can a simpler solution be conjured that provides the vast majority of the features, without all of the Achilles' heels and hurdles, in a practical solution that scales to fit within the Mac OS X operating system (and potentially other systems) more effectively?
This makes me think. Fifteen years ago we would have been discussing whether or not anyone truly needs preemptive multitasking -- I mean cooperative multitasking works just fine. Memory protection? My Mac doesn't crash that often. That's the sort of thing you might need on a server... but on a desktop?

Consider that even Sun has run into issues trying to get ZFS to work within bootable environments, and that it has been even more daunting to graft ZFS onto other environments, including presumably OS X. ZFS might not be "The Holy Grail" everyone has seen it as. True, there's a lot to like/love about ZFS... but if it's flaky and temperamental, and/or the licensing is draconian or prohibitive... or if it muddies the user experience and requires users to know when to use file system x vs. file system y, it's not intuitive for the primary target market (i.e. personal computers).
What are the issues with ZFS booting that Sun has run into? I have a server at work that boots and roots off a ZFS pool with mirrored drives. Works fine. If you're talking about ZFS originally not being a boot/root filesystem, that is more of an issue of putting in the work. There weren't really "issues" at all... just a lot of work. Consider that any modern OS boots like this:
System firmware loads and after running POST, searches for bootable devices. Once it has selected a bootable device, it has to be able to read enough off the disk to load a bootloader. The bootloader has to be able to read the volume and filesystem, so it can load the OS kernel. Once control is transferred to the bootloader, it must then read the filesystem and copy the kernel and possibly a ramdisk (depending on the OS) into RAM.

Root filesystem support only is easier than boot and root. HP-UX comes with the Veritas Filesystem (VxFS) and Volume Manager (VxVM), but last I knew, doesn't boot from it. You have to have a disk with a standard or HP LVM configuration, and that disk needs to have a Hi Performance FileSystem (HFS -- no relation to Apple HFS) slice on it. When the system is booted, the HP firmware loads the HP ISL (initial system loader), located on a bootable volume. The ISL is located on an HFS slice (/stand). The ISL then loads /stand/vmunix, the HP-UX kernel. The kernel loads and has the VxFS and VxVM drivers. Once these drivers are loaded, the kernel mounts the root filesystem, which can be VxFS, and normal booting ensues.
 
MSFT is full of people with computer science degrees, and yet they seem to have difficulty coming up with good software. That is just a thought you should consider. A degree will equip you with skills but is no substitute for experience or talent.

I would suggest every developer starting out begin with Web development, as it is directly customer-facing, which can give you a better understanding of how to design user-friendly interfaces and provide you with insight into the general psychology of a typical user.

A CS degree will not teach you any of that.

No degree in any field will fix a lack of talent for the subject.

Microsoft's problem hasn't been the talent of the people they hire. I am not a fan, but the photo work and other projects are impressive. Their problematic file system was actually a database and had a very poor management structure. They didn't have a good focus or true specific requirements. The story of Longhorn is not a story of poor talent, it is a story of poor organization and management.

I would say your comments about a CS degree are rather off. Many schools teach human interaction as part of the degree. I would suggest no developer start with Web development. Web development has so many compromises that it is a very hard field for a starting developer to get right. Plus, the infrastructure and need for more than one discipline to do anything of note will have severe effects on the learning. Start with something like iPhone / Android / Palm Pre development. Small project, extreme need to concentrate on a good user experience, and very specific scope.
 
Let me know when you want to have a reasoned discussion rather than engaging in personal attacks.

I'm not the one taking offense at generic comments not even aimed at you... while going off on tangents about how many skills you have. ;) I was referring to the people you talk about in your last paragraph. :rolleyes: The ones who think ZFS is the one and only future for Mac OS X. You know...

Give them what they want and they might be quiet for a few... seconds.

MSFT is full of people with computer science degrees, and yet they seem to have difficulty coming up with good software. That is just a thought you should consider. A degree will equip you with skills but is no substitute for experience or talent.

I never argued differently... or at all. :confused: I'm not the one who brought it up.

I would suggest every developer starting out begin with Web development, as it is directly customer-facing, which can give you a better understanding of how to design user-friendly interfaces and provide you with insight into the general psychology of a typical user.

A CS degree will not teach you any of that.

Are you my old computing teacher?

PS. I have to side with NSMonkey on this issue. A lot of you are obviously enthusiastic about the potential of ZFS and have bought into the hype without considering the potential downside or the limitations compared to HFS+.

No, it's just that they see 16 EB, or Apple supporting it, and it's an instant win! ext4's 1 EB limit is far more than what we need.
 
I have no argument with the impressive feature set of ZFS, but it's debatable how many of those offer real value to the end user on a desktop.
It really makes me sad to read something like this.
  • Writable snapshots (clones): They are VERY useful even on a desktop. For example, you could install an OS upgrade into a separate boot environment (a clone of your system disk). After the upgrade you can boot both the old and the new boot environment: system upgrades with minimal downtime and no risk at all. You can try this easily today by upgrading OpenSolaris in a VM. Try it and then come back and tell me you still don't want this feature on your desktop.
  • SSD+HD on the desktop: I'm already using this combination right now on my desktop. SSDs will be very common soon, and ZFS's use of SSDs for read and write caches is very good. You could buy a tiny SSD (big enough to hold your working set) and notice the advantage of lower latencies. BTW: Ask anyone who already uses an SSD on the desktop whether he wants to give it up. Quite the contrary: you won't stand the slow speed of hard disks anymore.
  • Data corruption: You're answering your own question: often you don't notice that your data got corrupted, because there is no checksum. Given today's hard disk capacities I don't want to use a non-checksummed filesystem anymore. Why take the risk of losing an x TB filesystem?
After using a recent version of ZFS for some time you don't want to use anything else anymore. ext4, HFS+, JFS, NTFS, etc. are at least a generation behind.

Also, I think it is very remarkable that all this power of ZFS is exposed through a beautiful and simple command-line interface: there are only two commands to learn, 'zpool' and 'zfs'. (Don't forget that ZFS is both a filesystem and a volume manager.)

That's why Apple's decision to drop ZFS is the disappointment of the year for me.
 
Maybe the engineers at Apple decided they'd be better off designing their own file system. Who knows?
 
FreeBSD Kernel > Darwin Kernel

Porting Aqua/Cocoa to FreeBSD is easier than writing a new file system. ZFS is production-ready in FreeBSD 8.
 
Check zfs-discuss

Bonwick already confirmed what many suspected:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-October/033125.html

There were a couple of ex-Sun ZFS engineers over at Apple working on this, last I checked, who were running the community... I suspect that Ellison couldn't work out a decent licensing deal with Apple, given all they wanted/needed to change.

Oh, and there's a fork of the project that a few ZFS macosforge folks put out on Google Code, so I suspect it will continue to live... It would be nice, though, to get some of the changes the engineers were putting in more recently for the SL launch.
 
Okay... so Apple did see ZFS as "risky"

We don't need Mac OS X booting from ZFS... That was always overly optimistic, IMHO... I think most of us at least wanted built-in, Apple-supported ZFS read/write support in SL (and they were virtually there on delivering that before it was pulled). It's silly that 10.5 Leopard supports reading ZFS volumes and SL doesn't even manage that. We don't even really need Disk Utility holding our hands with ZFS, as it can be assumed that if you are using it you are a power user using the command line.

Snow Leopard was always about under-the-hood changes, and this was the killer feature which just didn't happen...
 
BTW: Ask anyone who already uses an SSD on the desktop whether he wants to give it up. Quite the contrary: you won't stand the slow speed of hard disks anymore.

That's why Apple's decision to drop ZFS is the disappointment of the year for me.
This sounds a whole lot like something Steve would not give up either. Apple has formally dropped a particular branch of effort. They have not lost interest in having a new file system at all, nor in any of the features ZFS offers, nor in any of the software and hardware that has been developed assuming it exists.

If Light Peak, FireWire, iPod, and iPhone are any indicators, they are going to go "inside and proprietary" with their strategy so it happens sooner and, in their opinion, better, with not-well-discussed features Apple itself will exploit to be first to market with future features.

Having all the major product fab vendors for CPU, memory, and more in their pocket means whatever they agree to behind the scenes will be adopted in actual practice, some of which may propagate to the Wintel world.

Rocketman
 
There is always the HAMMER FS from DragonflyBSD which could be used:

http://en.wikipedia.org/wiki/HAMMER

It is BSD-licensed and delivers a lot of what ZFS promises; there are some things that need to be added to make it more comprehensive.

Oh, and then there is the ability for Steve to say, "It's Hammer time!" when he announces 10.7.
 
I don't think there were any major issues... I haven't been able to make ZFS crash the system... You can create/export/import storage pools with no problems... mirrored storage... take disks offline, replace disks, create ZFS filesystems, change mount points... it's all good!
 

Attachment: Screen shot 2009-10-25 at 15.35.40.png (881.5 KB)
http://devwhy.blogspot.com/2009/10/loss-of-zfs.html

------------------------------------------------

The loss of ZFS

Saturday, October 24, 2009 at 1:03 PM | Posted by Louis Gerbarg

Well, in case you haven't read any of the myriad stories about it, it appears that Apple has decided not to use ZFS on Mac OS X. Gruber has sources that say it was primarily licensing concerns, which is consistent with what people have implied to me, both recently, and around WWDC (although at that time I think there was probably still hope of resolving the issues).

Now, some people may jump in to comment that it couldn't be licensing issues, since ZFS is opensource (under the CDDL), and that Apple already uses CDDL software (DTrace). That may be true, but often in deals that involve large companies there is more to it than that. Apple may have wanted guarantees of indemnification in the NetApp lawsuit. Maybe it wanted guarantees that certain modifications it wanted to make would be accepted upstream, or even to get Sun to make certain changes. It also might have wanted additional distribution rights that were not granted under the CDDL. It is typical for companies to negotiate custom agreements in such cases (and for some money to change hands), so the idea that licensing issues are why it fell through is entirely reasonable, even though it is an opensource product. Obviously Sun's steady decline in the marketplace, and the uncertainty caused by the Oracle acquisition, may have greatly complicated any such negotiations.

Why not do a new filesystem?

Apple has a lot of talented file system engineers. They are certainly capable of doing something comparable to ZFS, at least for their target market. The problem with developing a new modern filesystem is that it generally takes longer than a single OS release cycle. Most companies are really bad at having large teams focused on projects that will not ship in the next version of the project they are working on.

This is a particularly acute problem at Apple, which traditionally has done things with very few engineers. I don't want to get into exact numbers, but I recall having a discussion with the head of a university FS team who was discussing the FS he was working on. He was pitching it to a group of Apple engineers. It was some interesting work, but there were some unsolved problems. When he was asked about them he commented that they didn't have enough people to deal with them, but he had some ideas and it shouldn't be an issue for a company with a real FS team. It turned out his research team had about the same number of people working on their FS as Apple had working on HFS, HFS+, UFS, NFS, WebDAV, FAT, and NTFS combined. I think people don't appreciate how productive Apple is on a per-engineer basis. The downside of that is that sometimes it is hard to find the resources to do something large and time consuming, particularly when it is not something that most users will notice in a direct sense. That is especially true if senior management is not excited about the idea.

Because of that, I was fairly convinced ZFS was a credible future primary FS for Apple. Not because it was an optimal design for them (it isn't), but because it was a lot less work than doing a new design from scratch. The fact that its fundamental architecture is 20 years newer than HFS meant it would still be better than HFS+ in almost all respects even if it was not designed for Apple's exact needs. Clearly I was wrong, since Apple has stopped the ZFS project.

What changed?

Well, a couple of things have happened. The first is that Mac OS X has gotten more mature. They no longer need to port all of those FSes; they already have them working, and in most cases they work fairly well. That frees up some engineers. Apple has also greatly expanded the number of people working on their kernel, since work on some parts of it can be amortized over many different products (Mac OS X, iPhone, AppleTV, etc.).

Suddenly the notion of doing a new filesystem seems doable, so long as it is a real priority and the FS team doesn't get pulled off to keep adding features or doing major work on legacy FSes. It is still a lot of work, though, especially when Apple already had ZFS approaching production quality on OS X.

Apple can do better than ZFS

Sun calls ZFS "The Last Word in Filesystems", but that is hyperbole. ZFS is one of the first widely deployed copy on write FSes. That certainly makes it a tremendous improvement over existing FSes, but pioneers are the ones with arrows in their back. By looking at ZFS's development it is certainly possible to identify mistakes that they made, and ways to do things better if one were to start from scratch. From where I sit, there are 3 obvious ways doing a new FS will be better for Apple than ZFS:

There has been new fundamental research since ZFS was designed that simplifies many of the issues involved. In particular, see the paper "B-trees, Shadowing, and Clones" (PDF). That paper is the basis for the design of BtrFS, which has a very similar feature set to ZFS but is internally entirely different. LWN has an article about BtrFS that explains the significance in some detail (it is written by Valerie Aurora, who worked on ZFS at Sun).

ZFS was designed for the storage interfaces available a decade ago. Spinning disks are going to be with us for a long time, especially for bulk storage in data centers and on backup devices. The future is all about solid state. Flash SSDs have significantly different performance characteristics than spinning media, and there may be FS design decisions one could make that would benefit from that. Now, any FS Apple designs will have to work acceptably on traditional drives, but if they are designing for the future then flash is where to be. ZFS has had some optimization work for flash, but it is all in terms of using flash as part of a storage hierarchy. That makes complete sense, since ZFS's primary deployment targets are high-end systems and data center storage. Those systems have multiple drives, so the idea of separate flash drives for a ZIL and L2ARC is completely reasonable. Most consumers have one drive in their system, and maybe an external drive for bulk data, data exchange, and backup.

That brings up the last point. ZFS is designed for big systems. It works on small systems, but most of the tradeoffs favor very large computers with lots of drives. This shows up in a number of ways. The first is that ZFS is not currently capable of adding single drives to an existing vdev or migrating vdevs between various types (mirror, raidz, raidz2). This is a major feature for smaller users who might want to add a single drive, but is a non-issue for data center users, who tend to add large numbers of drives all at once, since they will add whole vdevs. Another issue is that ZFS assumes you have a lot of RAM. NEC has been doing a port of OpenSolaris to ARM, and they determined they could not get ZFS to use less than 8 megabytes of RAM without making incompatible format changes (Compacted ZFS). With those changes they could squeeze it into a more reasonable 2 megabytes. On a desktop that doesn't seem like a big deal, but on an iPhone 3G or a Time Capsule 8 MB of wired memory is an enormous issue.


The only major downside is that if Apple is just starting on a next generation FS now it could be a long time before we get our hands on it.

But now we are going to have another incompatible next generation filesystem

Wolf brought this up during some of the ZFS talk on twitter yesterday. My general opinion is that it doesn't matter. People use drives for two largely unrelated tasks. One is running their computers. This is fixed storage. The other is for data exchange. In the old days people used floppies for their sneakernet media, which made the situation much simpler to understand. In recent years the market realities have caused people to move to using SD cards, thumbdrives, and hard drives as the exchange medium of sneakernet.

The important point to understand is that while the physical devices may be the same, the use model is different, just as the use of a floppy disk and an internal hard drive were different. Nobody would balk at the notion that floppies should use different FSes than internal drives. Likewise, most people shouldn't care that their external drives are formatted differently than their internal drives.

There are complicated features you want for your boot drives and system disks. Ideally you could have them on your interchange disks, but there are other features that are more important, particularly interoperability and simplicity. ZFS didn't bring either of those. There might have been a few people who were psyched to be able to use ZFS to share disks between a Mac and a Solaris or FreeBSD box, but honestly those people are few and far between. Whether Apple used ZFS or something else, it is just as interoperable with Linux and Windows (which is to say, not at all). So the fact that Apple looks to be doing a new FS does not impact interoperability in any real sense.

The other feature you really want for an interchange FS is simplicity. There are a lot of devices out there that use an FS to communicate with a computer. The simplest example is a digital camera via its media cards, but there are many others. Something like ZFS is way too complex for those devices, and honestly most of the features of ZFS, like multiple-drive support and snapshots, are useless since the devices don't have the physical interconnects or user interfaces to expose those features. There is certainly an argument to be made that we could use something a bit better than FAT32 or exFAT as that format, but ZFS was not the right solution for that.

In other words, for that disk you want to use as an external drive to drag between computers you don't want something like ZFS, you want something that is simple enough that a firmware engineer can write a read-only implementation from the specs in less than a week. For the disk embedded in your computer (operationally or literally) you want something like ZFS, but it doesn't matter if it is interoperable with anything else because you won't be moving it between systems.

This is basically how Windows works. Microsoft generally uses NTFS for internal drives, but FAT for external drives. Ultimately somebody should design a filesystem explicitly for use as an interchange format and license it for free, then everyone can deal with their internal FSes and do what makes the most sense for their OSes and markets.
 