I read the guy's blog up to his first claim, the one about the complexity of case sensitivity. That claim is nonsense. First, HFS+ stores filenames in canonically decomposed Unicode (you'd want one of the two canonical representations anyway, otherwise your file system will be in deep **** if you want to support Unicode, and you _do_ want to support Unicode), and that makes case insensitivity quite trivial. It _is_ trivial compared to creating the canonical representation in the first place. For example, "é" can arrive as the single precomposed code point U+00E9 or as U+0065 plus combining U+0301; HFS+ always stores the decomposed form.
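To make that concrete, here's a little C sketch of my own (not from his post)--it builds "é" both ways and shows that a literal comparison sees two different strings until both are normalized to decomposed form D, which is what HFS+ stores:

    /* Builds "é" precomposed (U+00E9) and decomposed (U+0065 U+0301),
       then compares before and after normalizing to form D.
       Compile with: cc demo.c -framework CoreFoundation */
    #include <CoreFoundation/CoreFoundation.h>
    #include <stdio.h>

    int main(void) {
        UniChar pre[] = { 0x00E9 };           /* "é" as one code point */
        UniChar dec[] = { 0x0065, 0x0301 };   /* "e" + combining acute */
        CFMutableStringRef a = CFStringCreateMutable(NULL, 0);
        CFMutableStringRef b = CFStringCreateMutable(NULL, 0);
        CFStringAppendCharacters(a, pre, 1);
        CFStringAppendCharacters(b, dec, 2);

        /* Literal comparison: different code points, so not equal. */
        printf("raw equal: %d\n", CFStringCompare(a, b, 0) == kCFCompareEqual);

        /* Normalize both to canonical decomposed form (what HFS+ stores). */
        CFStringNormalize(a, kCFStringNormalizationFormD);
        CFStringNormalize(b, kCFStringNormalizationFormD);
        printf("NFD equal: %d\n", CFStringCompare(a, b, 0) == kCFCompareEqual);

        CFRelease(a);
        CFRelease(b);
        return 0;
    }

Once everything on disk is in one canonical form, an insensitive compare is just a fold-and-compare on top of that--which is the "trivial" part.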
But you are focused on the wrong use case--it's not about optimizing performance when the user is typing a name into a save or open panel, it's about the *millions* of times the file system performs filename comparisons in the course of regular activity: launching apps, copying files, Time Machine backups, and so on. Did you read his post? Did you try his fs_usage example to see just how often the file system is accessed? Off the top of my head, comparisons happen for every open() and lstat() call you see there, and probably others.
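If you want to see it yourself, run something like "sudo fs_usage -w -f filesys" in one terminal while running even a trivial program like this one (the path is a placeholder I made up--point it anywhere deep):

    /* Touch one deep path; watch with fs_usage to see the traffic. */
    #include <sys/stat.h>
    #include <stdio.h>

    int main(void) {
        struct stat st;
        /* One innocuous-looking call; the file system must resolve
           and compare every component of the path to satisfy it. */
        if (lstat("/Users/you/Library/Preferences", &st) == 0)
            printf("size: %lld\n", (long long)st.st_size);
        return 0;
    }

Each of those syscalls is cheap to write, but behind every one is the component-by-component lookup described next.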
Take a simple example: TextEdit wants to save your preferences, which it does periodically, so it opens /Users/you/Library/Preferences/com.apple.TextEdit. To find the file, the file system starts with "Users" and compares it against each entry in the root directory until it finds a match. Then it compares "you" against each entry in /Users, and so on. Each path component can mean dozens (or more) of filename comparisons (and I'm simplifying). If the file system is case-sensitive HFSX (technically, HFS+ is always case-insensitive; HFSX is HFS+ with the option for either), then each comparison is simply a byte comparison, as noted here:
http://developer.apple.com/technotes/tn/tn1150.html#HFSX
If it's case-insensitive, then every comparison is a call to the FastUnicodeCompare() function:
http://developer.apple.com/technotes/tn/tn1150.html#UnicodeSubtleties
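To illustrate the difference in per-character work, here's a deliberately simplified sketch of my own. The real FastUnicodeCompare() in TN1150 folds case through tables covering all of Unicode and also skips certain ignorable characters; I fold ASCII only, just to show the shape of the two loops:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef uint16_t UniChar;

    /* Case-sensitive HFSX path: plain code-unit comparison. */
    int compare_sensitive(const UniChar *a, size_t alen,
                          const UniChar *b, size_t blen) {
        size_t n = (alen < blen) ? alen : blen;
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i]) return (a[i] < b[i]) ? -1 : 1;
        return (alen == blen) ? 0 : ((alen < blen) ? -1 : 1);
    }

    /* Stand-in for TN1150's lowercase table: ASCII only here. */
    static UniChar fold(UniChar c) {
        return (c >= 'A' && c <= 'Z') ? (UniChar)(c - 'A' + 'a') : c;
    }

    /* Case-insensitive HFS+ path: a table lookup per character
       before each comparison--that's the extra work. */
    int compare_insensitive(const UniChar *a, size_t alen,
                            const UniChar *b, size_t blen) {
        size_t n = (alen < blen) ? alen : blen;
        for (size_t i = 0; i < n; i++) {
            UniChar fa = fold(a[i]), fb = fold(b[i]);
            if (fa != fb) return (fa < fb) ? -1 : 1;
        }
        return (alen == blen) ? 0 : ((alen < blen) ? -1 : 1);
    }

    int main(void) {
        UniChar a[] = { 'R','E','A','D','M','E' };
        UniChar b[] = { 'r','e','a','d','m','e' };
        printf("sensitive:   %d\n", compare_sensitive(a, 6, b, 6));
        printf("insensitive: %d\n", compare_insensitive(a, 6, b, 6));
        return 0;
    }

Multiply that extra lookup per character by the millions of comparisons above and you can see where the cost argument comes from.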
Also, CFString/NSString have multiple internal representations, including canonical, path-optimized ones. I haven't actually profiled the OS, but I suspect there is some caching (a lot, maybe) of these decomposed, canonical names within those objects and in other parts of the APIs.
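As one concrete, public example of an API handing out the decomposed form (the caching behind it is my speculation, as I said), CFStringGetFileSystemRepresentation() converts a CFString into the representation the file system expects--decomposed UTF-8 on HFS+:

    /* Asks CFString for the file-system representation of "Résumé"
       built with precomposed é, then dumps the bytes.
       Compile with: cc fsrep.c -framework CoreFoundation */
    #include <CoreFoundation/CoreFoundation.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        UniChar chars[] = { 'R', 0x00E9, 's', 'u', 'm', 0x00E9 };
        CFStringRef name = CFStringCreateWithCharacters(NULL, chars, 6);
        char buf[PATH_MAX];
        if (CFStringGetFileSystemRepresentation(name, buf, sizeof(buf)))
            for (const unsigned char *p = (const unsigned char *)buf; *p; p++)
                printf("%02X ", *p);
        printf("\n");
        CFRelease(name);
        return 0;
    }

Run it and each precomposed é (U+00E9) comes out as the bytes for "e" plus a combining acute (0x65 0xCC 0x81).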
Again, this isn't about the user typing a name--there, the performance differences are completely inconsequential.
But this is all speculation on both our parts. What would be interesting is some actual benchmarks.
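Here's the kind of thing I have in mind, as a rough sketch--the path and iteration count are placeholders, you'd run it on both a case-sensitive HFSX volume and a case-insensitive HFS+ one, and to be fair the kernel's name cache may mask much of the difference for a hot path:

    /* Times repeated lstat() calls on one deep path. */
    #include <sys/stat.h>
    #include <sys/time.h>
    #include <stdio.h>

    int main(void) {
        const char *path = "/Users/you/Library/Preferences";  /* placeholder */
        const int iterations = 100000;
        struct stat st;
        struct timeval t0, t1;

        gettimeofday(&t0, NULL);
        for (int i = 0; i < iterations; i++)
            (void)lstat(path, &st);
        gettimeofday(&t1, NULL);

        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%d lstat() calls in %.3f s (%.1f us each)\n",
               iterations, secs, secs * 1e6 / iterations);
        return 0;
    }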
He suggests that case insensitivity should be handled at the UI level instead of in the file system. Good luck doing that if you can have 2048 files called "MYLETTER.TXT" in a folder, in every possible mix of uppercase and lowercase (11 letters, two cases each: 2^11 = 2048).
Umm, if the UI level is treating them all as the same file name, how did they get created in the first place? Sure, a user could create them from the command line, but let's keep this realistic--in the unusual case of there being a conflict at all, you're unlikely to see more than one or two.
He suggests there will be problems when standards (like Unicode) change. Guess what: it is part of the Unicode standard that canonical decompositions will _never_ change (the normalization stability policy guarantees it), and it is part of the definition of HFS+ that its case-insensitive comparison algorithm will _never_ change.
I thought this discussion was about ZFS, too? If ZFS handles case folding for case-insensitive comparisons even slightly differently than HFS+ does, then users will get different behavior depending on which file system they happen to be using. On the other hand, if both file systems are case-sensitive, then the Open panel, Save panel, and Finder can apply one comparison of their own and ensure the experience is consistent regardless of the file system.