This is only my opinion, but I believe you are being incredibly dramatic. The sky has not fallen. Even the largest storage configurations for the vast majority of macOS machines are still fairly small in the grand scheme of things. If people are keeping their most important data there and not using iCloud and/or Time Machine... that's on them for putting all their eggs in one basket. You speak as if you know for a fact that Apple had no reason to omit the feature you wish they hadn't, when you clearly have no information to back that claim. Point being... get a grip.
A consumer mechanical hard disk is specced at about one unrecoverable read error per 10^14 bits read, which works out to roughly 12 TB of reads. That number is printed right on the manufacturer's data sheet, and it is considered normal. And that's just the tip of the iceberg. You can't tell me how many corrupt files you have on your disk. Period. That fact alone should terrify you. Depending on how old your Mac is, how old your data is, and how much data you have... it could be nearly a certainty that you have corrupt files on it. You just have no idea. Ignorance is bliss, I guess? Hopefully it's some useless stuff and not something you care about.
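If you want to see where that 12 TB figure comes from, here is a back-of-the-envelope calculation, assuming the common consumer-drive spec of one unrecoverable read error per 10^14 bits read (a sketch, not a measurement of any particular drive; check your own data sheet, since specs vary):

```python
# Back-of-the-envelope for the "one bad bit per ~12 TB" figure above,
# assuming the common consumer HDD spec of 1 unrecoverable read error
# per 1e14 bits read.

URE_RATE_BITS = 1e14                     # bits read per expected unrecoverable error
bytes_per_error = URE_RATE_BITS / 8      # 1.25e13 bytes
tb_per_error = bytes_per_error / 1e12    # ~12.5 TB

full_reads_of_4tb_drive = tb_per_error / 4
print(f"~{tb_per_error:.1f} TB read per expected unrecoverable error")
print(f"that's roughly {full_reads_of_4tb_drive:.1f} full reads of a 4 TB drive")
```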
ZFS was a path Apple considered taking until licensing issues made it problematic. The full feature set of ZFS requires considerable resources, including lots of memory. Obviously, features like deduplication can be turned off, and the memory requirements drop considerably. Apple is not currently in the server market, so many of ZFS's features would be overkill.
HFS+ reached maturity long ago and has been feature complete for years.
APFS still has a rather long list of features left to implement, and by the time it is feature complete, spinning hard drives will likely make up a shrinking share of the installed base for this technology. Data integrity is important, and having built-in smarts to know when data has suffered "bit rot" is valuable. I have yet to run into the issue on my SSD, so I am not sure what form it takes. BUT, APFS and SSD technology are both still maturing, and I am not sure whether data integrity will end up as more of a hardware feature or a software feature once everything has settled. Only time will tell, but things are now moving pretty quickly.
People keep talking about a brand or a specific file system (ZFS), but I'm talking about a feature: data integrity. Every major OS has a data integrity story except Apple. Microsoft has ReFS, Linux has BTRFS, Ubuntu has ZFS, Solaris has ZFS. Why doesn't Apple have a data integrity story in their new file system? I can't think of a good reason.
Never trust hardware. Ever. Hardware deals in analog signaling and analog switching. It lies to you. Application programmers have the luxury of trusting the abstraction. Kernel, driver, firmware, and file system programmers do not. The idea that hardware error checking will save us is crazy.
Never. Happened. Once.
(I've been on the Mac since the 7100.)
If you've been on the Mac since the 7100 and you have been moving your data forward since then, it has absolutely happened to you. You just have no idea. It comes down to data size, transfer count, and time. I personally have songs that have acquired blips, JPEGs that have acquired some weirdness, and video files with errant blocks that didn't used to be there. You notice it if you know what to look for.
All modern HDDs and SSDs use error correcting codes (ECC), which makes additional error correction superfluous.
I quote myself from above: Never trust hardware. Ever. Hardware deals in analog signaling and analog switching. It lies to you. Application programmers have the luxury of trusting the abstraction. Kernel, driver, firmware, and file system programmers do not. The idea that hardware error checking will save us is crazy.
I disagree with your take and your understanding of the matter. As someone who understands end-to-end data checks, I can tell you the impact is not infinitesimal; it is real and very measurable. Depending on the application, what you are saying is true. Servers? APFS is not there, but it is not designed for that goal. It is designed purely for consumer applications.
With that mindset, Apple made a decision (which I mostly understand): for the extremely RARE (and it is rare, infinitesimal as you would say) event of a corrupted file, they will rely on file backups for recovery.
Protect the structure of the drive (metadata) and rely on redundant copies (backups) for the user data. If the user cares about their data, they will have backups.
From a consumer standpoint, I understand this mindset.
If you understand end-to-end data checks, then you understand how much time the CPU spends waiting for data from the disk. Even on modern, very fast PCIe SSDs, the CPU does a lot of idling while the data streams in. Modern CPUs are so fast (even the Core M parts in the MacBooks) that these kinds of workloads are just not a big deal.
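To make that concrete, here is a rough sketch that hashes a buffer in memory and compares the CPU's checksum throughput against an assumed PCIe SSD read speed. The 2 GB/s figure is an assumption, not a measurement of any particular Mac, so adjust it for your own hardware:

```python
# Rough sketch: compare CPU checksum throughput to a nominal SSD read speed.
# The SSD figure is an assumption; the checksum is CRC32 as a stand-in for
# whatever block checksum a filesystem might use.

import time
import zlib

BUF_SIZE = 256 * 1024 * 1024          # 256 MiB of dummy data
NOMINAL_SSD_READ_GBPS = 2.0           # assumed PCIe SSD sequential read, GB/s

buf = bytes(BUF_SIZE)                 # zero-filled buffer is fine for timing

start = time.perf_counter()
crc = zlib.crc32(buf)
elapsed = time.perf_counter() - start

cpu_gbps = BUF_SIZE / elapsed / 1e9
print(f"checksum throughput:        {cpu_gbps:.1f} GB/s")
print(f"assumed SSD read speed:     {NOMINAL_SSD_READ_GBPS:.1f} GB/s")
print(f"ratio (checksum / disk):    {cpu_gbps / NOMINAL_SSD_READ_GBPS:.1f}x")
```

Real filesystems pick checksums designed for speed (Fletcher variants, CRCs with hardware support), so the real-world overhead tends to be smaller than what this naive sketch measures.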
The idea that backups will save you from file corruption is hilarious. Corruption is systemic: if a user has backups, the backups will almost certainly contain the same corruption. If you have no way of detecting its existence, it just propagates through the data set until all copies are corrupt. Then you open the file, find that it is corrupt, go to your backup, and it is also corrupt. Sad face. Additionally, APFS removes the ability to easily make a true duplicate of a file. You copy a file to try to preserve it, but APFS just makes a thin clone that points to the same blocks as the original. So when one copy is corrupt, both copies are corrupt. Have fun with that.
Corruption in files (bit rot in some cases, hardware issues in others) is much more likely than people seem to think it is. I would bet good money that plenty of folks in this forum (including some arguing with me now) have corrupt files. Unfortunately they have no way to check. Disks are getting bigger, not smaller. The need for data integrity is increasing, not decreasing. Why is everybody arguing against a feature that is obviously helpful and good for everybody?
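For what it's worth, "no way to check" is only true if the filesystem is your only tool. Here is a minimal do-it-yourself sketch that records a SHA-256 for every file under a directory and flags anything whose hash changes on a later run. The paths are placeholders, and a changed hash can of course also mean a legitimate edit, so treat hits as candidates for investigation, not proof of rot:

```python
# Minimal DIY integrity check: build a SHA-256 manifest on the first run,
# then compare against it on later runs and report changed files.
# Usage: python check.py /path/to/data manifest.json

import hashlib
import json
import os
import sys

def hash_file(path, chunk=1024 * 1024):
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    """Map every file path under root to its current SHA-256 digest."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            manifest[path] = hash_file(path)
    return manifest

if __name__ == "__main__":
    root, manifest_path = sys.argv[1], sys.argv[2]
    current = build_manifest(root)

    if os.path.exists(manifest_path):
        with open(manifest_path) as f:
            previous = json.load(f)
        for path, digest in previous.items():
            if path in current and current[path] != digest:
                print(f"CHANGED (possible corruption): {path}")

    with open(manifest_path, "w") as f:
        json.dump(current, f, indent=2)
```

It is a crude stand-in for what a checksumming filesystem does transparently on every read, but it at least gives you a way to know whether the files you archived years ago are still the files you archived.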