Apple should have stuck with ZFS years ago. I'll stay on El Capitan for a number of years because it works on most modern and previous Apple hardware.
I don't care for ZFS. All they had to do was make APFS do checksumming, but for some reason they didn't. Funny that they provide encryption but not integrity protection.
 
Nope, I installed it on an external drive from the official release. It's just not the default option (or necessarily recommended.) Don't do it on a drive you want to use for your Time Machine backups, though....

Interesting. Thank you for the clarification/correction. :)
 
I lost an important photo library backup to this bug when trying to restore from a disk image. WTF
 
With that attitude, let's hope for more "one of Apple's great engineering successes"
 
Does --converttoapfs NO no longer work?

It has not worked for me, possibly because the systems in question are managed units (Profile Manager). Trying to use it just throws an error related to the license agreement.

Regardless, there should simply be an option to do it or not. Apple is far from the only tech company that assumes it always knows better than the end user. Like other companies, they are frequently wrong about this assumption. Providing defaults that work great for most people, while also providing accessible advanced options for others (not hidden command-line-only options), is the proper way to go.
 
Well, what can I say without being misunderstood or creating any controversy: all software has bugs, vulnerabilities, etc.

How do we know for sure that the problem is APFS, and not CCC's software? I remember when High Sierra came out and CCC only supported HFS+; later they updated their software to support APFS. At the time, CCC's creator admitted he had some trouble understanding how APFS works because of the lack of documentation from Apple. Maybe it's time for CCC to use another image type instead of sparse disk images. It's very easy to blame others: Apple changed their file system, and CCC still wants to use the same file type and technique they've been using for the longest time. Maybe that old technique just isn't compatible with the new FS anymore. I think it's time for a change: update your software and stop blaming others.
 
I reverted back to Sierra using a Time Machine backup. Since upgrading to High Sierra I was not able to perform any backups with Time Machine. Everything is working normally now. I think I will stay on Sierra for a while.
 
*and the key thing is that the drive would have to be full as well. Not many people run their system drive full while also trying to back up to a full destination drive.

You'd basically need an HFS-formatted network drive whose capacity you paid no attention to, with an APFS sparse disk image on it that you back up to with CCC itself (or perhaps your own rsync script). That drive would need to fill up before you hit the bug, and you wouldn't actually realise it until you came to retrieve something from the sparse disk image and found it corrupt.

Also, in this kind of setup you'd expect the person to at least have some kind of notification for when their NAS drive was full.

I am concerned that this bug reveals something inherently wrong with the file system, not just a bug.
I do have users who regularly eat up their personal drive space. I imagine something similar could occur if they simply tried to copy any large file that exceeded the available drive space.
I don't care for ZFS. All they had to do was make AFPS do checksumming, but for some reason they didn't. Funny that they provide encryption but not integrity protection.
They don't do checksumming because copies are not copies. The new file system stores metadata about copies, not the actual file contents. That's why copies happen so quickly. And why they can do encryption: because it's ultimately just more metadata.

I don't trust APFS at all. Files corrupt very easily. Check it yourself: make a copy of a very large file on the same APFS volume, open and edit the original, then interrupt the save operation (to the same file, not a new one) via a power interruption. See what happens.
 
I am concerned that this bug reveals something inherently wrong with the file system, not just a bug.
I do have users who regularly eat up their personal drive space. I imagine something similar could occur if they simply tried to copy any large file that exceeded the available drive space.

The bug is simply a mismatch of assumptions between disk images and APFS. Each side assumes the other is responsible for ensuring that enough space is available. For physical APFS volumes this isn't a problem, because the container handles it.

(Doesn't mean there aren't additional bugs, but this one seems fairly easy to explain and doesn't suggest a deeper issue to me.)
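Given that the failed writes reported no error, the only userspace defense is to verify after writing rather than trust the return value of write(). A rough sketch in Python of that defensive pattern (my own illustration, not CCC's actual code; `verified_write` and `free_bytes` are made-up names):

```python
import os
import tempfile

def verified_write(path: str, data: bytes) -> None:
    """Write data, force it to disk, then read it back to confirm it
    actually landed (a silent-failure guard, not a fix for the bug)."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # push past OS caches toward the (possibly virtual) device
    with open(path, "rb") as f:
        if f.read() != data:
            raise IOError(f"verification failed: {path} does not match what was written")

def free_bytes(path: str) -> int:
    """Report free space on the volume holding `path` before writing at all."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

# Demo against a temporary directory.
with tempfile.TemporaryDirectory() as d:
    payload = b"backup payload" * 1024
    if free_bytes(d) > len(payload):
        verified_write(os.path.join(d, "chunk.bin"), payload)
```

Of course, a read-back can still be served from cache, and the free-space check is exactly what the sparse image misreports here, so this only narrows the window rather than closing it.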

They don't do checksumming because copies are not copies.

They don't do data checksumming for performance/wear reasons.
 
They don't do checksumming because copies are not copies. The new file system stores metadata about copies, not the actual file contents. That's why copies happen so quickly. And why they can do encryption: because it's ultimately just more metadata.

I don't trust APFS at all. Files corrupt very easily. Check it yourself: make a copy of a very large file on the same APFS volume, open and edit the original, then interrupt the save operation (to the same file, not a new one) via a power interruption. See what happens.
I don't see why the referential copy is a problem for checksumming. Wherever data of any kind is written to disk, they can store a checksum with it.
They don't do data checksumming for performance/wear reasons.
This is what I assumed. I wish there were an option to enable checksumming, though. The performance hit is worth it for anything critical, and I'd rather not do this manually.
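Since the file system won't do it, per-file data checksumming can be approximated in userspace with sidecar hashes. A minimal sketch in Python (the `.sha256` sidecar convention here is my own, not any standard tool's):

```python
import hashlib
import os

def write_sidecar(path: str) -> str:
    """Hash the file in chunks and store the SHA-256 next to it as <name>.sha256."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    sidecar = path + ".sha256"
    with open(sidecar, "w") as f:
        f.write(h.hexdigest())
    return sidecar

def verify_sidecar(path: str) -> bool:
    """Re-hash the file and compare against the stored sidecar digest."""
    with open(path + ".sha256") as f:
        expected = f.read().strip()
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected
```

Run `write_sidecar` after each backup and `verify_sidecar` before each restore; any silent corruption of the file since the last hash will show up as a mismatch. Unlike ZFS, this can detect rot but not repair it.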
 
I am concerned that this bug reveals something inherently wrong with the files system, not just a bug.
I do have users that regularly eat up their personal drive space. I imagine something similar could occur if they simply tried to copy any large file that exceeded available drive space.

No, it can't, or that would be the bug reported (a much more dangerous one). This is a rare edge case, which is why it wasn't caught. :rolleyes:
 
The bug is simply a mismatch of assumptions between disk images and APFS. Each side assumes the other is responsible for ensuring that enough space is available. For physical APFS volumes this isn't a problem, because the container handles it.

(Doesn't mean there aren't additional bugs, but this one seems fairly easy to explain and doesn't suggest a deeper issue to me.)



They don't do data checksumming for performance/wear reasons.

Checksumming doesn't cause any wear. It doesn't checksum because copies are not copies and can't be checksummed. You can't checksum metadata.
This may yield greater performance, but that is beside the point.
https://developer.apple.com/library.../Conceptual/APFS_Guide/Features/Features.html
 
It doesn't checksum because copies are not copies and can't be checksummed.

So, you end up with the same checksum twice. Or you checksum once and know the same checksum will apply twice. So what?

You can't checksum metadata.

And yet APFS does exactly that. As do some others: both ext4 and XFS are listed there as only checksumming the metadata.

This may yield greater performance, but that is beside the point.
https://developer.apple.com/library.../Conceptual/APFS_Guide/Features/Features.html

No, it's precisely the point, and your link doesn't even mention "ECC" or "checksum" or anything related. And their FAQ is unfortunately disingenuous:

What has Apple done to ensure the reliability of my data?

Apple products are designed to prevent data corruption and protect against data loss.

To protect data from hardware errors, all Flash/SSD and hard disk drives used in Apple products use Error Correcting Code (ECC). ECC checks for transmission errors, and when necessary, corrects on the fly. Apple File System uses a unique copy-on-write scheme to protect against data loss that can occur during a crash or loss of power. And to further ensure data integrity, Apple File System uses the Fletcher's checksum algorithm for metadata operations.

That's not a complete answer. The complete answer is: Apple had to make a trade-off, and they believe not checksumming the entire data was the appropriate decision to make, because 1) they didn't see significant data loss in their statistics that checksums could have mitigated, and 2) the downsides of checksumming data were too large. So that's what they went with.
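For context on the Fletcher algorithm the FAQ cites: it is about the cheapest integrity check there is, just two running sums per block. A Fletcher-32 sketch in Python (APFS reportedly uses a 64-bit variant for its metadata; this version is only meant to illustrate the cost, and to show that cloned content poses no obstacle, since identical data trivially yields identical checksums):

```python
def fletcher32(data: bytes) -> int:
    """Fletcher-32 over 16-bit little-endian words: two running sums
    modulo 65535, combined into one 32-bit value."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input
    sum1 = sum2 = 0
    for i in range(0, len(data), 2):
        word = data[i] | (data[i + 1] << 8)
        sum1 = (sum1 + word) % 65535
        sum2 = (sum2 + sum1) % 65535
    return (sum2 << 16) | sum1

# Two byte-identical "clones" share one checksum, so copy-on-write
# references don't prevent checksumming; the checksum just applies twice.
original = b"some file contents"
clone = bytes(original)
assert fletcher32(original) == fletcher32(clone)
```

Note this loop touches every byte on every write, which is the CPU (and, for flash, extra metadata write) cost the performance argument is about.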
 
They don't do data checksumming for performance/wear reasons.

The drive itself of course does calculate a checksum or error correcting code, so data corruption on the storage medium will be detected by the drive. A separate checksum in the file system merely protects against communication errors as data is transferred between the computer and the drive. If this were a problem, there are alternate solutions that don't require the file system to store checksums - though this may be pointless on a system that (usually) doesn't have ECC memory.
 



Apple's APFS file system included in macOS High Sierra suffers from a disk image vulnerability that in certain circumstances can lead to data loss, according to the creator of Carbon Copy Cloner.

In a blog post last Thursday, software developer Mike Bombich explained that he had uncovered the data writing flaw in the Apple File System, or APFS, through his regular work with "sparse" disk images.

[Image: macOS volume icons]

For those who aren't familiar with the term, a sparse disk image is basically a file that macOS mounts on the desktop and treats as if it was a physically attached drive with a classic disk volume structure. The flexibility of sparse disk images means they are commonly used in the course of performing backup and disk cloning operations, hence Bombich's extensive experience with them.
Two related problems are identified by Bombich, above. The first is that the free space on the APFS-formatted sparse disk image doesn't update as it should when the free space on the underlying physical host disk is reduced. The second problem is the lack of error reports when write requests fail to dynamically grow the disk image, resulting in data being "written" into a void. Bombich tracks both bugs back to macOS's background "diskimages-helper" application service, which he has since reported to Apple.

Bombich's video demonstrating the APFS bug

Every installation of High Sierra on Macs with all-flash storage converts the existing file system to APFS, which is optimized for modern storage systems like solid-state drives. However, as Bombich notes, ordinary APFS volumes like SSD startup disks are not affected by the problem described above, so the vast majority of users won't be affected by it - the flaw is most applicable when making backups to network volumes. Bombich says Carbon Copy Cloner will not support APFS-formatted sparse disk images until Apple resolves the issue.

The APFS flaw follows the discovery of another bug in Apple's operating systems that received extensive coverage last week. That bug is induced by sending a specific character in the Indian language Telugu, which causes certain apps on iPhones, iPads, and Macs to freeze up and become unresponsive. The Telugu character bug has already been fixed in Apple's upcoming iOS 11.3 and macOS 10.13.4 software updates.

Article Link: APFS Bug in macOS High Sierra Can Cause Data Loss When Writing to Disk Images
I am not convinced the bug is limited to disk-full situations. I just lost a week's worth of work on a mounted sparsebundle image. The disk has 900GB free, and the image has about 1GB free. The situation developed where Preview froze; it couldn't Quit or Force Quit. The only option was to do a hard shutdown. I neglected to unmount the disk image, which has not generally been a problem. When I restarted, the image was reverted to the last time the computer was shut down. All work was lost. I think the issue runs deeper than expected. I'm now trying to figure out the best solution for reverting back to earlier, more stable times. Or, heaven forbid, moving to Windows. Anyone have a suggestion??
 
I think the issue runs deeper than expected. I'm now trying to figure out the best solution for reverting back to earlier, more stable times. Or, heaven forbid, moving to Windows. Anyone have a suggestion??

Can’t you simply make your sparse image use HFS+?
 
Can’t you simply make your sparse image use HFS+?
That's a great reminder! When I created the image, I almost went that direction. I remember researching all the options; an APFS sparsebundle that could save space, auto-grow, and even collapse seemed too incredible... I guess it was! Thanks for the reminder!
 
Is this problem really fixed?!

I just noticed that I lost space and data on my disk, that is NOT good! Fortunately I have a backup...

I have the latest version of Mojave – I made a clean install.
 
Is this problem really fixed?!

I just noticed that I lost space and data on my disk, that is NOT good! Fortunately I have a backup...

I have the latest version of Mojave – I made a clean install.

That is a good question. With the introduction of Mojave, Time Machine now backs up data using APFS volumes. This was a huge improvement over HFS+ because Time Machine can now take full advantage of APFS' ability to dynamically resize the data snapshots used as the basis for the TM backup. This tells me that Apple continues to develop and improve APFS within macOS.

Apple also released their APFS documentation on 9-17-2018, giving developers and consumers even greater insight into APFS' layered design and its partitioning scheme for storing metadata. So we are definitely moving in the right direction.

Since Mojave, I have not experienced any issues with data loss. After your clean install, did you run Disk Utility to repair the disk?

I forgot to include the link to Apple's latest documentation on APFS. Here it is for your consideration:

https://developer.apple.com/support/apple-file-system/Apple-File-System-Reference.pdf
 
That is a good question. With the introduction of Mojave, Time Machine now backs up data using APFS volumes. This was a huge improvement over HFS+ because Time Machine can now take full advantage of APFS' ability to dynamically resize the data snapshots used as the basis for the TM backup. This tells me that Apple continues to develop and improve APFS within macOS.

Apple also released their APFS documentation on 9-17-2018, giving developers and consumers even greater insight into APFS' layered design and its partitioning scheme for storing metadata. So we are definitely moving in the right direction.

Since Mojave, I have not experienced any issues with data loss. After your clean install, did you run Disk Utility to repair the disk?

I forgot to include the link to Apple's latest documentation on APFS. Here it is for your consideration:

https://developer.apple.com/support/apple-file-system/Apple-File-System-Reference.pdf

I tried everything to fix it, without results.

I formatted the hard drive as HFS+.
 
Is this problem really fixed?!

I just noticed that I lost space and data on my disk, that is NOT good! Fortunately I have a backup...

I have the latest version of Mojave – I made a clean install.

Your problem doesn't sound related to this one at all.
 