While Squeeze is a nice idea, there are some things about how it applies HFS+ compression that need to be fixed before I would consider using it.
GothAlice said:
The fact that it will use the xattr (stored in the directory entry for the file) for small files is gob-smackingly nice; since the read head is most likely going to be near the extents file (that stores directory listings) on-disk, reads of small compressed files would be faster than reading the file normally even accounting for the decompression step.
Unfortunately, it seems that Squeeze skips very small files entirely. In addition, the small files that it does compress aren't stored properly: if the compressed data for a file is 3802 bytes or less, it should be stored in the com.apple.decmpfs extended attribute with no resource fork at all, but Squeeze stores the data in the resource fork anyway.
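You can check which storage scheme a given file actually ended up with using something like the sketch below. It is only a sketch, and it assumes the publicly documented decmpfs header layout (4-byte magic, 4-byte compression type, 8-byte uncompressed size, little-endian) on a little-endian Mac, with type 3 meaning the payload is inline in the com.apple.decmpfs xattr and type 4 meaning it lives in the resource fork; XATTR_SHOWCOMPRESSION is needed because that xattr is normally hidden.

```c
/* Sketch: report where a file's HFS+ compression payload is stored (macOS only). */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/xattr.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    uint8_t buf[3802 + 16];  /* large enough for the biggest inline xattr */
    /* XATTR_SHOWCOMPRESSION makes the otherwise-hidden decmpfs xattr visible */
    ssize_t len = getxattr(argv[1], "com.apple.decmpfs", buf, sizeof buf, 0,
                           XATTR_SHOWCOMPRESSION);
    if (len < 16) {
        printf("%s: no decmpfs header, file is not HFS+-compressed\n", argv[1]);
        return 0;
    }

    uint32_t type;
    uint64_t orig_size;
    memcpy(&type, buf + 4, sizeof type);        /* assumes little-endian host */
    memcpy(&orig_size, buf + 8, sizeof orig_size);

    printf("compression type %u, uncompressed size %llu\n",
           type, (unsigned long long)orig_size);
    if (type == 3)
        printf("payload stored inline in the decmpfs xattr (%zd-byte xattr)\n", len);
    else if (type == 4)
        printf("payload stored in the resource fork\n");
    else
        printf("other/unknown storage scheme\n");
    return 0;
}
```

If the small-file behaviour described above is right, running this on a small file compressed by Squeeze would report type 4 (resource fork) even though the payload is small enough to have been stored inline.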
There is one other problem, however, that concerns me much more. I hope I'm wrong, but it appears that Squeeze does not support uncompressed data blocks for HFS+ compression. That by itself is not a serious problem (although it is a significant omission); the real problem is that it still stores compressed data blocks that are larger than the original uncompressed blocks, which could trigger a buffer overflow and corrupt some files. Fortunately, Squeeze appears to validate compressed files before finalizing them, but in some cases the oversized compressed blocks appear to work, and if Apple ever changes its HFS+ compression implementation there is a chance those files could become corrupted.
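For reference, here is a rough sketch of the per-chunk decision a compressor should be making. It assumes the reverse-engineered resource-fork (type 4) layout, in which each 64 KiB chunk is either a zlib stream or, when zlib would not shrink it, the raw bytes prefixed with a single 0xFF marker byte, so an encoded chunk never grows by more than one byte.

```c
/* Sketch: encode one chunk for HFS+ (zlib) compression, falling back to an
 * uncompressed "stored" block when compression does not help. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define CHUNK_SIZE 65536u

/* Writes the encoded chunk to out (capacity must be at least in_len + 1)
 * and returns the encoded length, never more than in_len + 1. */
size_t encode_chunk(const uint8_t *in, size_t in_len, uint8_t *out)
{
    uLongf comp_len = compressBound(in_len);
    uint8_t *tmp = malloc(comp_len);

    if (tmp && compress2(tmp, &comp_len, in, in_len, Z_BEST_COMPRESSION) == Z_OK
            && comp_len < in_len) {
        /* Chunk actually shrank: keep the zlib stream. */
        memcpy(out, tmp, comp_len);
        free(tmp);
        return comp_len;
    }

    /* Chunk did not shrink (or zlib failed): store it raw behind the
     * 0xFF marker instead of writing an oversized compressed block. */
    free(tmp);
    out[0] = 0xff;
    memcpy(out + 1, in, in_len);
    return in_len + 1;
}
```

The point of the fallback is the invariant it preserves: no on-disk block is ever larger than the original chunk plus one byte, so the decompressor never has to cope with an oversized block, which is exactly the situation Squeeze seems to be creating.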