I plan not to upgrade to High Sierra due to the recent disastrous iTunes release. I cannot lose the feature the new version is missing.

I do have a Time Machine backup, albeit not so recent.

Thanks for your suggestions.
You're one hardware failure away from losing every "critical file" created or modified since that backup. Upgrade or not; either way, if your files are as critical as all that, you might want to plug in that Time Machine drive for a while?
 
You're one hardware failure away from losing every "critical file" created or modified since that backup. Upgrade or not; either way, if your files are as critical as all that, you might want to plug in that Time Machine drive for a while? Or not, up to you!
If your files are important, you should have not only Time Machine....

...but also something like a recent complete backup (Carbon Copy Cloner). And you should always have at least three copies -- with one of them offsite.
 
Personally, I'm disappointed they aren't going for Microsoft's jugular with better built-in productivity tools.

Apple's focus has always been on the consumer, but now that iOS has made inroads into the enterprise, maybe you're right; maybe it's time for Apple to release an iWork Pro and make some serious gains.

Not only am I tired of MS Office constantly crashing my Mac, but it would also be a good strategy since it's easier to replace the Mac than Office.
 
This is only my opinion, but I believe you are being incredibly dramatic. The sky has not fallen. Even the largest storage configurations for the vast majority of macOS machines are still fairly small in the grand scheme. If people are keeping their most important data there and not utilizing the iCloud features and/or Time Machine...that's on them for putting all their eggs in one basket. You speak as if you know for a fact that there was no reason for them to omit what you wish they hadn't...when you clearly do not have access to any information to back that claim. Point being...get a grip.
A consumer mechanical hard disk will turn up one unrecoverable bit per 12TB of reads. That is listed on the manufacturer's data sheet and it is seen as normal. And that's just the tip of the iceberg. You can't tell me how many corrupt files you have on your disk. Period. That fact alone should terrify you. Depending on how old your Mac is, how old your data is, and how much data you have... it could be nearly a certainty that you have corrupt files on it. You just have no idea. Ignorance is bliss I guess? Hopefully it's some useless stuff, not something you care about.
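
For what it's worth, you don't have to stay in the dark: a checksum manifest will find silent corruption for you. Fingerprint every file today, re-run the tool later, and any file whose hash changed without you editing it has rotted. A minimal sketch in C against macOS's CommonCrypto (the tool and file names are just examples):

```c
// Minimal bit-rot detector: print "sha256  path" for one file.
// Save the output for your whole library somewhere safe; if a later run
// disagrees and you never edited the file, it has silently corrupted.
// Build on macOS: cc -o sha256file sha256file.c
#include <stdio.h>
#include <CommonCrypto/CommonDigest.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }

    CC_SHA256_CTX ctx;
    CC_SHA256_Init(&ctx);

    unsigned char buf[65536];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        CC_SHA256_Update(&ctx, buf, (CC_LONG)n);
    fclose(f);

    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    CC_SHA256_Final(digest, &ctx);

    for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("  %s\n", argv[1]);
    return 0;
}
```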

ZFS was a path that Apple considered taking until licensing issues made the situation problematic. The full feature set of ZFS requires considerable resources - and lots of memory. Obviously, things like the dedup features could be turned off, and the memory requirements would drop considerably. Apple is not currently in the server market, so many of the features of ZFS would be overkill.

HFS+ reached maturity long ago and has long been feature-complete.

APFS still has a rather long list of features left to implement, and by the time it is feature-complete... hard drives will likely make up less and less of the installed base for this technology. Data integrity is important, and having built-in smarts to know when data has suffered "bit rot" is valuable -- I have yet to run into this issue on my SSD, so I am not sure what form it takes. BUT, both APFS and SSD technology are maturing, and I am not sure whether data integrity will end up being more of a hardware feature set or a software feature set once everything has matured. Only time will tell, but things are now moving pretty quickly.
People keep talking about a brand or a specific file system (ZFS), but I'm talking about a feature: data integrity. Every major OS has a data integrity story except Apple. Microsoft has ReFS, Linux has BTRFS, Ubuntu has ZFS, Solaris has ZFS. Why doesn't Apple have a data integrity story in their new file system? I can't think of a good reason.

Never trust hardware. Ever. Hardware deals in analog signaling and analog switching. It lies to you. Application programmers have the luxury of trusting the abstraction. Kernel, driver, firmware, and file system programmers do not. The idea that hardware error checking will save us is crazy.

Never. Happened. Once.

(I've been on the Mac since the 7100.)
If you've been on the Mac since the 7100 and you have been moving your data forward since then, it has absolutely happened to you. You just have no idea. Data size, transfer count, and time. I personally have songs that have acquired blips, JPEGs that have acquired some weirdness, and video files with errant blocks that didn't used to be there. It happens if you know what to look for.

All modern HDDs and SSDs use error correcting codes (ECC), which makes additional error correction superfluous.
I quote myself from above: Never trust hardware. Ever. Hardware deals in analog signaling and analog switching. It lies to you. Application programmers have the luxury of trusting the abstraction. Kernel, driver, firmware, and file system programmers do not. The idea that hardware error checking will save us is crazy.

I disagree with your take and understanding of the matter. As someone who understands end-to-end data checks, I can tell you the impact is not infinitesimal but real and very measurable. Depending on the application, what you are saying is true. Servers? APFS is not there, but it is not designed for that goal. It is designed for purely consumer-based applications.

In this mindset, Apple made decisions (that I mostly understand): for the extremely RARE (and it is rare; infinitesimal, as you would say) event of a corrupted file, they will rely on file backups for recovery.

Protect the structure of the drive (metadata) and rely on redundant copies (backups) for the user data. If the user cares about their data, they will have backups.

From a consumer standpoint, I understand this mindset.
If you understand end-to-end data checks, then you understand how much time the CPU spends waiting for data from the disk. Even on modern, very fast PCIe SSDs, the CPU is doing a lot of idling while the data streams in. Modern CPUs are so fast (even the Core M stuff in the MacBooks) that these kinds of workloads are just not a big deal.
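
And that claim is easy to sanity-check yourself. A rough sketch, assuming a single core and an arbitrary 256 MB buffer: measure how fast the CPU can checksum memory and compare it to your disk's sequential read speed. Real filesystems use far cheaper checksums (CRC or Fletcher variants) than SHA-256, so this overstates the cost:

```c
// Rough sanity check: how many MB/s can a single core SHA-256?
// Compare the result to your disk's sequential read throughput.
// Build on macOS 10.12+: cc -O2 -o hashbench hashbench.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <CommonCrypto/CommonDigest.h>

int main(void) {
    const size_t size = 256 * 1024 * 1024;  // 256 MB of dummy data
    unsigned char *buf = malloc(size);
    if (!buf) { perror("malloc"); return 1; }
    memset(buf, 0xA5, size);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    CC_SHA256(buf, (CC_LONG)size, digest);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("SHA-256: %.0f MB/s\n", size / 1e6 / secs);

    free(buf);
    return 0;
}
```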

The idea that backups will save you from file corruption is hilarious. Corruption is systemic: if a user has backups, the backups will almost certainly contain the same corruption. If you have no way of identifying its existence, it just moves around the data set until all copies are corrupt. Then you open a file, find that it is corrupt, go to your backup, and it is also corrupt. Sad face. Additionally, APFS removes the ability to easily make a true duplicate of a file. So you copy a file to try and preserve it, but APFS just makes a thin file that points to the same blocks as the original. So now when one copy is corrupt, both copies are corrupt. Have fun with that.
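
You can see that behavior directly with the clonefile(2) syscall APFS introduced (reportedly what Finder's Duplicate uses on APFS). A minimal sketch, with hypothetical paths: the clone completes instantly precisely because no data blocks are duplicated, which is also exactly why a clone is not a backup:

```c
// clonefile(2) on APFS: the "copy" appears instantly because source and
// clone share the same data blocks until one of them is modified.
// A corrupt shared block is therefore corrupt in both "copies".
// Fails with EXDEV across volumes and ENOTSUP on non-APFS filesystems.
// Build on macOS 10.12+: cc -o clone clone.c
#include <stdio.h>
#include <sys/clonefile.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <source> <clone>\n", argv[0]);
        return 1;
    }
    if (clonefile(argv[1], argv[2], 0) != 0) {
        perror("clonefile");   // the destination must not already exist
        return 1;
    }
    printf("cloned %s -> %s without copying any data blocks\n",
           argv[1], argv[2]);
    return 0;
}
```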

Corruption in files (bit rot in some cases, hardware issues in others) is much more likely than people seem to think it is. I would bet good money that plenty of folks in this forum (including some arguing with me now) have corrupt files. Unfortunately they have no way to check. Disks are getting bigger, not smaller. The need for data integrity is increasing, not decreasing. Why is everybody arguing against a feature that is obviously helpful and good for everybody?
 
Wow, that reads like it auto-converts HFS+ to APFS. I wonder how it does that? Create a new APFS partition, copy from HFS+ to APFS, reformat the old HFS+ partition, and delete it to end up with one big APFS partition?

Similarly, I wonder about the relationship between this and Time Capsule. Does Time Capsule remain as-is, or would it be converted to APFS too?

Can an external HFS+ drive be auto-converted to APFS using the same approach?

Lots of questions.

Watch this. It answers all of your questions. https://developer.apple.com/videos/play/wwdc2017/715/
 
I was initially excited for Metal 1, but adoption was slow and I could count on my left hand the games that actually used it. It was an improvement in fps, but still 15 frames slower than DirectX.

I'm hoping Metal 2 will bring it within 5 frames of Windows 10. That is what it would take for me to ditch my Windows 10 partition. However, I doubt it.
 
I've not looked at Photos since the 1st version drove me away to Lightroom. Have they finally figured out a way to allow two people on a Family iCloud account to share ALL of the photos in a library, complete with full resolution and all metadata? That might bring me back.
 
If your files are important, you should have not only Time Machine....

...but also something like a recent complete backup (Carbon Copy Cloner). And you should always have at least three copies -- with one of them offsite.
Yeah, I mentioned that in a previous post too.

Carbon Copy Cloner is awesome. I personally keep a monthly, encrypted CCC backup at my wife's office, in addition to a local Time Machine drive. On top of that, all my important stuff is also backed up via iCloud or Dropbox. I learned all this the hard way when my laptop and its backup drive were burgled from an old apartment...

Worst case scenario, my apartment is hit by a meteorite: I could recover everything from those offsite drives and then sync the rest from those cloud services. (Unless I was in the apartment at the time, in which case I'd be "uploaded to the cloud" myself.)
I've not looked at Photos since the 1st version drove me away to Lightroom. Have they finally figured out a way to allow two people on a Family iCloud account to share ALL of the photos in a library, complete with full resolution and all metadata? That might bring me back.
It is a shame that shared albums in Photos downsize large images, because it's generally a very slick feature that works quite well across all Apple devices. Even if it did not resize things, though, I'm not sure it would easily satisfy your need to sync everything. The only way to really do what you want with Photos would be to use the same Apple ID for all the devices, and that would be madness.
 
A consumer mechanical hard disk will turn up one unrecoverable bit per 12TB of reads. That is listed on the manufacturer's data sheet and it is seen as normal. And that's just the tip of the iceberg. You can't tell me how many corrupt files you have on your disk. Period. That fact alone should terrify you. Depending on how old your Mac is, how old your data is, and how much data you have... it could be nearly a certainty that you have corrupt files on it. You just have no idea. Ignorance is bliss I guess? Hopefully it's some useless stuff, not something you care about.

That is not what that specification means in the slightest. Learn what it means and then come back.
 
Best thing about APFS? Duplicating a 40GB video in literally 3 seconds. :)
Wait, what? That's faster than the disk's theoretical throughput. Are you sure it's not just creating a pointer to the original file?

Edit: I read that it doesn't actually copy it; it just creates a pointer, and blocks are copied as needed if you modify one file. I can imagine this being very useful for version control, and I'm thinking of how they could improve Time Machine with this. So, if I copy to an external disk for transfer or backup purposes, does it *actually* copy it? It has to.
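
As far as I can tell, yes, it has to: clones only work within a single volume, so a copy to an external disk writes real blocks. The higher-level copyfile(3) API makes this visible, since COPYFILE_CLONE falls back to a physical copy whenever cloning isn't possible. A minimal sketch, assuming macOS 10.12 or later (paths are placeholders):

```c
// copyfile(3) with COPYFILE_CLONE: clone when source and destination are
// on the same APFS volume, quietly fall back to a full physical copy when
// they are not (e.g. destination on an external disk).
// Note: per the copyfile headers, COPYFILE_CLONE implies COPYFILE_EXCL,
// so <dest> must not exist yet.
// Build on macOS: cc -o copyclone copyclone.c
#include <stdio.h>
#include <copyfile.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <source> <dest>\n", argv[0]);
        return 1;
    }
    if (copyfile(argv[1], argv[2], NULL, COPYFILE_CLONE) != 0) {
        perror("copyfile");
        return 1;
    }
    printf("copied (cloned if possible): %s -> %s\n", argv[1], argv[2]);
    return 0;
}
```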
All modern HDDs and SSDs use error correcting codes (ECC), which makes additional error correction superfluous.
No, it doesn't. What if the cable connecting my SSD flips some bits, or the firmware does? These kinds of checks are supposed to be end-to-end, for both practical and design reasons.
 
Never trust hardware. Ever. Hardware deals in analog signaling and analog switching. It lies to you. Application programmers have the luxury of trusting the abstraction. Kernel, driver, firmware, and file system programmers do not. The idea that hardware error checking will save us is crazy.

You might be right, but the level of system protection you are talking about can only be achieved with completely redundant systems. If hardware cannot be trusted, then the software running on that hardware cannot be trusted either.... but depending on the level of recoverability and uptime needed, you invest in the hardware only up to the level of protection that you need. In other words, if you cannot trust a hard drive to store data, you cannot trust the controller or the computer the drive is stuck into... and then, if you do have redundancy, there is the problem of which one you are going to trust. Then you get to the problem of needing at least three of those and redundant ledgers of the results -- effectively having the majority overrule the minority with regard to which result is wrong.

But for your average home computing environment, that level of protection becomes insane overkill and is not worth the extra cost. Apple hardware is considered expensive by the industry as a whole, and you can go overboard to the point of.... well, paranoia. If I lose a single document, I lose some investment, but I can rebuild. If I lose a single photo.... I lose it.... I move on.

Even with a robust file system, you still need at least three backups, logging of changes, and off-siting (which 99% of people don't do -- and if 99% don't even do that, a robust file system is not going to fully protect them anyway). What matters is that you never end up without enough backups, copies of originals, and a history of changes. It is always a balance, though, and APFS and the newer hardware do move the needle... and if you let your data just sit and "rot", no level of redundancy will save you.
 
That's obviously why it was named High Sierra, just as Snow Leopard was simply a refinement of Leopard and didn't bring a huge amount of innovative features. A lot of people on this forum got exactly what they were asking for: refinements and new technologies instead of more features piled on or a changed-up GUI. But others will still complain about a lack of innovation.

Evidently can't please everyone...

Agreed - not sure I need something radical in my macOS. Refinements that add speed and functionality are just fine by me. APFS is a great addition to the macOS environment.

Additionally, I am happy with the improvements in Photos and Notes, which are my key productivity tools. The recent upgrades to iWork (three in five months) were welcome too.

I think upgrades to the iWork and iLife applications should be the focus of future enhancements, along with AI integration into the Mail and Calendar applications.

But as you said - not everyone will be happy.
 
I still have a little hope that it may be coming.

Nah. To move "digital life" apps like News and Podcasts to macOS would be to intermingle the purposes of the iPad and the Mac. Apple has no intention of giving up the higher margins of its laptop and iMac lines by providing the same level of functionality in its tablets.
 
So the new Mac Pro is basically going to be an external GPU expansion chassis with a modular head.
I doubt there is a new Mac Pro at this point. I think the iMac Pro is what was hinted at. With external GPUs, it is as powerful as you want it to be now.
 
I doubt there is a new Mac Pro at this point. I think the iMac Pro is what was hinted at. With external GPUs, it is as powerful as you want it to be now.
No, there is a "modular" Mac Pro in development. Apple has mentioned both: the iMac Pro will be out first, then the Mac Pro will follow (probably sometime next year - guessing first half). When the iMac Pro was announced and one reporter tweeted asking whether this was the new Mac Pro, Apple sent back a message within 20 minutes saying that THIS IS NOT the Mac Pro they promised, that this was something already in the works beforehand, and that the Mac Pro was still under development.
 
What feature of HFS+ do you rely on? Or do you just not trust APFS yet?
Yup. I do not fully trust APFS. And I am afraid converting to APFS would result in losing my data, although previous posts suggest it won't.
 
ZFS has numerous features that APFS currently lacks: multiple compression methods (gzip and lz4, among others); the ability to clone snapshots into their own read/write volumes; deduplication, which allows the filesystem to identify redundant data and remove the copies (although most people can't use this feature due to its high RAM usage); the ability to send snapshots over a network, store them in a file, or do basically whatever you want with them; and tons of other features that would take a while to explain.

As to why Apple hasn't adopted it: they originally were going to. Due to licensing uncertainties arising from Sun's acquisition by Oracle, and a certain Not-Invented-Here syndrome, they ended up shelving the project. You can see it mentioned at the bottom of the old Mac OS X Snow Leopard Server page: http://web.archive.org/web/20080721031014/http://www.apple.com/server/macosx/snowleopard/
The fruits of their labor have been salvaged by the o3x project, which you can access here if you're interested (although I've had stability problems with it, so I don't recommend it; I personally use ZFS inside a Linux VM): https://openzfsonosx.org
Dealing in any licensing with Oracle is toxic. Tim's not stupid enough to even try.

Besides that, most of the missing features from ZFS are really server-related. They'd be "nice to have" on workstations, but outside the Unix/Linux crowd nobody would know what to do with them. Apple is solidly out of the server business; they don't really need to chase features like these, because they are going for "computing appliances" and storing everything important on cloud servers now.
Simple fact: Your data is only useful if it is correct. Do you care about the photos of your kids if they are just rainbows of JPEG compression artifacts? Do you care about your music library if it is loaded with pops from bit flips? Are your spreadsheets useful if what you read back off disk has different numbers or unreadable numbers?

No. It is useless.

Not treating data integrity as the first and most important goal of the file system is borderline criminal from an engineering standpoint. It doesn't matter if it's fast, and it doesn't matter if it's secure... if the data is wrong, none of it matters. I have heard rumblings that Apple did not include data integrity because they believe their hardware will not acquire errors. That is the height of hubris; things always break in unexpected ways, and we have untold examples of this spanning decades. To believe otherwise is such folly that anyone who believes it should probably be fired.

It is an embarrassment that APFS does not provide data integrity from day one. Being better than HFS+ is not enough. A modern file system must take responsibility for the vast amount of important data its users have.



I don't see how this isn't marketing nonsense. For some time now it seems that Apple has been refining their custom SSD firmware. There are most likely optimizations they can take advantage of between APFS and Apple-shipped SSDs. However, to give credence to the idea that APFS is "designed for SSDs" in some way that makes it better than ZFS is a serious long shot. I'd need to see some very compelling proof before I'd believe that. And even then it wouldn't matter, because it makes no difference how fast your file system is if you don't know whether or not your data is correct.


Entirely incorrect. ZFS is completely meaningful in all situations. If your data is wrong, it simply doesn't matter. ECC memory allows you to be pretty sure that what came out of memory was correct, but there are many other points on the chain where failures can accumulate. Maybe there are issues with the platform controller firmware, or some electrical issue, or storage controller issues. ECC is good; ALL computers should use ECC today. I won't build another computer without ECC ever again. But ZFS and other file systems with data integrity (ReFS, BTRFS) are a critical step toward understanding whether or not you are storing garbage.
Ultimately, ZFS anywhere (on other potential OSes as well) was officially dead as a doornail the minute Sun was acquired by Oracle. Oracle is a famously bad company to partner with and to license from... even worse than Microsoft. There's no way Tim would "hitch his wagon" to that horse for a ten-year-plus term.
 
Yup. I do not fully trust APFS. And I am afraid converting to APFS would result in losing my data, although previous posts suggest it won't.
There are always edge cases that might pop up on a more complex device like a Mac, which is why it is important to actually do backups the way they are supposed to be done. The odds are not that great -- probably not greater than those of a hard drive failure itself -- but it is best to ALWAYS do proper backups (at least 3 copies).

If the risk of upgrading to APFS is a problem, I would also be wary of upgrading to High Sierra (or any new OS version) for at least 3 months, while you wait for the inevitable little bugs to be addressed and make sure it is not going to be a problem.
 
I wish APFS were a good enough replacement for ZFS, so I could go with something more integrated with the system for everything. Unfortunately, it doesn't come close. ZFS is still the best. Here's hoping for future improvements! :)

EDIT: Well, unless encryption is important enough to you to forego most of the ZFS feature set.

Most of ZFS's functionality is really geared toward servers rather than workstations. For my data, encryption is very important.
 
A consumer mechanical hard disk will turn up one unrecoverable bit per 12TB of reads. That is listed on the manufacturer's data sheet and it is seen as normal. And that's just the tip of the iceberg. You can't tell me how many corrupt files you have on your disk. Period. That fact alone should terrify you. Depending on how old your Mac is, how old your data is, and how much data you have... it could be nearly a certainty that you have corrupt files on it. You just have no idea. Ignorance is bliss I guess? Hopefully it's some useless stuff, not something you care about.


People keep talking about a brand or a specific file system (ZFS), but I'm talking about a feature: data integrity. Every major OS has a data integrity story except Apple. Microsoft has ReFS, Linux has BTRFS, Ubuntu has ZFS, Solaris has ZFS. Why doesn't Apple have a data integrity story in their new file system? I can't think of a good reason.

Never trust hardware. Ever. Hardware deals in analog signaling and analog switching. It lies to you. Application programmers have the luxury of trusting the abstraction. Kernel, driver, firmware, and file system programmers do not. The idea that hardware error checking will save us is crazy.


If you've been on the Mac since the 7100 and you have been moving your data forward since then, it has absolutely happened to you. You just have no idea. Data size, transfer count, and time. I personally have songs that have acquired blips, JPEGs that have acquired some weirdness, and video files with errant blocks that didn't used to be there. It happens if you know what to look for.


I quote myself from above: Never trust hardware. Ever. Hardware deals in analog signaling and analog switching. It lies to you. Application programmers have the luxury of trusting the abstraction. Kernel, driver, firmware, and file system programmers do not. The idea that hardware error checking will save us is crazy.


If you understand end-to-end data checks, then you understand how much time the CPU spends waiting for data from the disk. Even on modern, very fast PCIe SSDs, the CPU is doing a lot of idling while the data streams in. Modern CPUs are so fast (even the Core M stuff in the MacBooks) that these kinds of workloads are just not a big deal.

The idea that backups will save you from file corruption is hilarious. Corruption is systemic: if a user has backups, the backups will almost certainly contain the same corruption. If you have no way of identifying its existence, it just moves around the data set until all copies are corrupt. Then you open a file, find that it is corrupt, go to your backup, and it is also corrupt. Sad face. Additionally, APFS removes the ability to easily make a true duplicate of a file. So you copy a file to try and preserve it, but APFS just makes a thin file that points to the same blocks as the original. So now when one copy is corrupt, both copies are corrupt. Have fun with that.

Corruption in files (bit rot in some cases, hardware issues in others) is much more likely than people seem to think it is. I would bet good money that plenty of folks in this forum (including some arguing with me now) have corrupt files. Unfortunately they have no way to check. Disks are getting bigger, not smaller. The need for data integrity is increasing, not decreasing. Why is everybody arguing against a feature that is obviously helpful and good for everybody?
Microsoft only supports ReFS on servers right now. Ubuntu ships ZFS only by the grace of Oracle, which could cancel permission at any time. BTRFS on Linux is still in late beta, where it's 90% great but certain configurations come with "it might explode" level support. Effectively zero consumer devices have the kind of support you're demanding right now. And we've done fine so far.

ZFS is effectively dead software now (and Solaris as well) unless Oracle changes its ways, because we have draconian IP laws. The era of companies with tens of millions to devote to pursuing arcane pure data science died with Sun. "A few mistakes in a billion" is good enough for 95% of people, and that's all the bean counters care about.
 