Microsoft only supports ReFS on servers right now. Ubuntu ships ZFS only by the grace of Oracle, which could revoke permission at any time. BTRFS on Linux is still in late beta, where it's 90% great but certain configurations come with "it might explode" level support. Effectively zero consumer devices have the kind of support you're demanding right now. And we've done fine so far.

ZFS is effectively dead software now (and Solaris as well) unless Oracle changes its ways, because we have draconian IP laws. The era of companies with tens of millions to devote to pursuing arcane pure data science died with Sun. "A few mistakes in a billion" is good enough for 95% of people, and that's all the bean counters care about.

Neither of those file systems has a license that is compatible with Apple at this point: BTRFS because the licenses are incompatible, and ZFS because Apple would not be able to work out an acceptable deal with Oracle. Wishing for what cannot be is just a waste.
Really hope they restart the tick-tock cycle with macOS. Leopard/Snow Leopard, Lion/Mountain Lion, etc...

I am fine with just having tocks annually. You could completely disentangle application support such as Safari, iTunes, Mail, etc. from the annual releases anyway (the only reason they sync them is because they need something to throw to the masses, who would never get excited by ARKit, Core ML, or APFS).
 
I doubt there is a new Mac Pro at this point. I think the iMac Pro is what was hinted at. With external GPUs, it is as powerful as you want it to be now.

Who really knows? My opinion: the top iMac Pro configs, topping out at five figures ($17k), pack an incredible amount of horsepower. The dGPU will be the only part that becomes obsolete rather quickly, and eGPU will take care of that problem, especially considering it is a desktop after all. It will take at least a decade before this kind of performance becomes mainstream, imo. 18-core Xeon, 128GB DDR4 ECC RAM, 4TB flash SSD, TB3, 10Gb Ethernet. That is pretty future-proof; perhaps not for the most demanding professionals (3-4 years maybe), but for the rest of us it definitely is. I see DDR5 RAM, perhaps TB4, and faster SSDs for sure, but how much of an improvement they will bring remains to be seen. 8K+ video would have to become the new standard, and we're still a long way off from even 4K becoming a broadcast standard if ATSC is not revised. These formats are really only used for professional film/TV production/mastering and streaming.
Yes, an iMac Pro with an eGPU connected via a dongle. :D ... :(

It's not like the iMac would be moving from its desk. eGPUs are more of a hassle for notebooks, but then again, those would only be used at home or work, connected to a couple of large external displays anyway. At least Apple has finally adopted a solution for the weakest link that has plagued every single Mac. I can't wait to get an enclosure next year with a decent $5-600 Nvidia card.
 
Snip ... A consumer mechanical hard disk will turn up one unrecoverable bit per 12 TB of reads. That is listed on the manufacturer's data sheet, and it is seen as normal. And that's just the tip of the iceberg. You can't tell me how many corrupt files you have on your disk. Period. That fact alone should terrify you. Depending on how old your Mac is, how old your data is, and how much data you have... it could be nearly a certainty that you have corrupt files on it. You just have no idea. Ignorance is bliss, I guess? Hopefully it's some useless stuff, not something you care about. ... /snip

If it's that bad then tell me why my OS has always worked since the *beginning.

*The beginning = I started using OS X right from the beginning (10.0 beta). I never ever reinstalled, yet my system still runs as well as before, healthy.
I am sure that my system has read way more than 12 TB in that period, yet I never ran into problems.

If it's just one single bit, that means one of my photos is just missing less than a pixel, so what!
 
So you would pay $5,000, then add another box one year after release.
eGPU is cool for the MacBooks, not for an iMac...
The GPU in the iMac will be faster than any dedicated Thunderbolt 3 card, since Thunderbolt 3 tops out at 40 Gbps, a fraction of the bandwidth of an internal PCIe x16 slot.
 
It is a shame that shared albums in Photos downsize large images, because it's generally a very slick feature that works quite well across all Apple devices. Even if it did not resize things, though, I'm not sure it would easily satisfy your need to sync everything. The only way to really do what you want with Photos would be to use the same Apple ID for all the devices, and that would be madness.
It shouldn't be that hard to share all photos with my wife - but it sure is. We each have a MBP, iPhone and iPad - we want a shared library that auto-syncs when we add photos on any of those devices but we do not want to share an Apple ID because we don't want to share contact lists, browser bookmarks and other iCloud stuff. This shouldn't be that hard!
 
Most of ZFS's functionality is really geared toward servers rather than workstations. For my data, encryption is very important.
Seems the biggest complaint is that ZFS has data integrity checking while APFS does not. The average user can definitely benefit from that, assuming there's not too big of a performance penalty (but maybe there is?). The funny thing is that encryption usually goes hand in hand with checksums.
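To make that last point concrete, here's a minimal sketch using CryptoKit (an illustration only, not how FileVault or APFS encryption is actually implemented): authenticated encryption like AES-GCM refuses to decrypt data that has rotted, so you get corruption detection as a side effect of encrypting.

```swift
import Foundation
import CryptoKit

// AES-GCM is authenticated encryption: decryption fails loudly if even
// a single bit of the stored ciphertext has been corrupted.
let key = SymmetricKey(size: .bits256)
let plaintext = Data("family photo bytes".utf8)

let sealed = try! AES.GCM.seal(plaintext, using: key)
var onDisk = sealed.combined!   // nonce + ciphertext + authentication tag

// Simulate one flipped bit of at-rest corruption:
onDisk[onDisk.count - 1] ^= 0x01

do {
    let box = try AES.GCM.SealedBox(combined: onDisk)
    _ = try AES.GCM.open(box, using: key)
    print("data intact")
} catch {
    print("corruption detected:", error)   // authentication failure
}
```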
 
Let us hope that the price is closer to $5 rather than $600 :rolleyes:

I was talking about the price of a good Nvidia GPU alone, and that's how much a brand-new, good mid-range card costs today. The high end is now $1k+. All in all, it's probably $1k total for a decent enclosure and GPU, since good enclosures themselves cost anywhere from $200 to $400.
The one Apple is selling developers now costs $600 and includes just an RX 580, which is really nothing to write home about. I'm talking about something like a 1080 Ti.
 
Yup. I do not fully trust APFS. And I am afraid converting to APFS would result in losing my data, although previous posts suggest it won't.
That's reasonable. I also like to let millions of consumers test stuff for me before I use it. Apple has tested the FS itself, but what if some weird use case has a problem with it?
If it's that bad then tell me why my OS has always worked since the *beginning.

*The beginning = I started using OS X right from the beginning (10.0 beta). I never ever reinstalled, yet my system still runs as well as before, healthy.
I am sure that my system has read way more than 12 TB in that period, yet I never ran into problems.

If it's just one single bit, that means one of my photos is just missing less than a pixel, so what!
I wouldn't worry about a 1-bit error per 12 TiB, but there are other failure cases. I've seen many weird disk-related failures in Macs, not necessarily with the disk itself but maybe the controller. It was scary because stuff would randomly stop working, but nothing would show any errors, including Disk Utility (fsck) and Apple's hardware test.
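For a rough sense of what that spec-sheet figure works out to, here's a back-of-the-envelope sketch, assuming the usual consumer-HDD rating of one unrecoverable read error per 10^14 bits read:

```swift
import Foundation

// Assumed spec: 1 unrecoverable read error per 1e14 bits, i.e. roughly
// one bad bit per 12.5 TB read, as printed on typical consumer HDD
// data sheets.
let perBitErrorRate = 1e-14

for terabytesRead in [1.0, 12.5, 50.0, 200.0] {
    let bitsRead = terabytesRead * 1e12 * 8
    let expected = bitsRead * perBitErrorRate   // expected error count
    let pAtLeastOne = 1 - exp(-expected)        // Poisson approximation
    print(String(format: "%6.1f TB read -> %.3f expected errors, P(>=1) ~ %.0f%%",
                 terabytesRead, expected, pAtLeastOne * 100))
}
```

By that rating, a machine that has read 50 TB over its life would have about a 98% chance of having hit at least one silent error, which is the point the quoted post was making.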
 
That's reasonable. I also like to let millions of consumers test stuff for me before I use it. Apple has tested the FS itself, but what if some weird use case has a problem with it?

Millions of consumers have already test-driven APFS. The transition on iOS went incredibly smoothly. I wouldn't expect less on the Mac. I will probably still perform a High Sierra clean install, then transfer my content back to my Mac from HFS+ formatted external drives, and start a fresh TC backup to avoid any issues. A clean install is a pretty good idea with any new OS anyway, especially if such a big change is being introduced.
 
Seems the biggest complaint is that ZFS has data integrity checking while APFS does not. The average user can definitely benefit from that, assuming there's not too big of a performance penalty (but maybe there is?). The funny thing is that encryption usually goes hand in hand with checksums.

It depends: if there's a substantial performance hit, then I guess Apple made the right decision not to include data integrity checking.

I wouldn't worry about a 1-bit error per 12 TiB, but there are other failure cases. I've seen many weird disk-related failures in Macs, not necessarily with the disk itself but maybe the controller. It was scary because stuff would randomly stop working, but nothing would show any errors, including Disk Utility (fsck) and Apple's hardware test.

I had one HDD fail, but I am aware of the risk; unlike most users, I always have a recent backup.
Millions of consumers have already test-driven APFS. The transition on iOS went incredibly smoothly. I wouldn't expect less on the Mac. I will probably still perform a High Sierra clean install, then transfer my content back to my Mac from HFS+ formatted external drives, and start a fresh TC backup to avoid any issues. A clean install is a pretty good idea with any new OS anyway, especially if such a big change is being introduced.


Nonsense, don't spread FUD.
 
I had one HDD fail, but I am aware of the risk; unlike most users, I always have a recent backup.
But how do you know your backup doesn't contain corrupted files that you're restoring? You don't know when your drive started to fail, only when it became unusable, so there could be randomly corrupted files in it.
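One rough DIY answer to that question, sketched below with made-up paths: keep a manifest of per-file SHA-256 digests of the originals, and re-check the backup against it later.

```swift
import Foundation
import CryptoKit

// Digest a file's contents as a hex string (fine for modest file sizes;
// a real tool would stream large files in chunks).
func sha256Hex(of url: URL) throws -> String {
    let data = try Data(contentsOf: url)
    return SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

// Build a filename -> digest manifest for one folder (non-recursive).
func manifest(of folder: URL) throws -> [String: String] {
    var digests: [String: String] = [:]
    let files = try FileManager.default.contentsOfDirectory(
        at: folder, includingPropertiesForKeys: nil)
    for file in files where !file.hasDirectoryPath {
        digests[file.lastPathComponent] = try sha256Hex(of: file)
    }
    return digests
}

do {
    // Hypothetical paths, for illustration only.
    let originals = try manifest(of: URL(fileURLWithPath: "/Users/me/Photos"))
    let backup = try manifest(of: URL(fileURLWithPath: "/Volumes/Backup/Photos"))
    for (name, digest) in originals where backup[name] != digest {
        print("mismatch or missing in backup: \(name)")
    }
} catch {
    print("verification failed:", error)
}
```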
Millions of consumers have already test-driven APFS. The transition on iOS went incredibly smoothly. I wouldn't expect less on the Mac. I will probably still perform a High Sierra clean install, then transfer my content back to my Mac from HFS+ formatted external drives, and start a fresh TC backup to avoid any issues. A clean install is a pretty good idea with any new OS anyway, especially if such a big change is being introduced.
macOS is used differently from iOS. People are managing files manually, running unsandboxed third-party software, performing huge read/write operations on TiB of data, etc. Stuff can go wrong, not necessarily on Apple's end. Example: there was an OS X update, Mavericks, that WD's RAID software had issues with, resulting in the entire startup disk being wiped. I minimize how much third-party software I run, but still.

With this eGPU stuff, are we going to be able to connect eGPUs to drive displays? Because it looks like they're only supporting them for computation purposes, e.g. machine learning.
 
I had 1 HDD fail, but I am aware unlike most users, I always have a recent backup.

I wish I could count the drive failures I have had on one hand. I added 4 or 5 more to this stack this year alone - with probably 2 more beginning to show the first signs of nearing their end.
[photo: a stack of failed hard drives]


Nonsense, don't spread FUD.
I actually do clean installs for every version, usually taking the chance to image the old one and then doing a fresh install, which makes me ensure I can still find the licenses, software, etc., as well as clean out stuff sitting in long-forgotten temporary directories. Call it annual house cleaning.

APFS of course has not been exercised the same way it will be here: other devices differ both in drive manufacturers and, of course, in that the desktop OS is a more wild and free environment where you do things you just don't get to do on locked-down devices... more chances of running into defects that may not have been hit before. I once ran a batch process that went through an entire bank's transactional database after regression testing, for a process that was about as potentially dangerous as a file system swap. Out of all the transactions on every single account, there was ONE lonely transaction that caused the entire system to abend... so you can never be sure.
 
Nonsense, don't spread FUD.

lec0rsaire has a valid reason for distrusting Apple's upgrade process, just as Shirasaki does for distrusting High Sierra's automatic APFS conversion: the voices of millions of bytes of data crying out at once and then being silenced by past OS X upgrades. Apple is notorious for failing to adequately test their data migration code when transitioning to new versions of their OSes and apps.

The decision to perform rigorous, large-scale testing of the APFS migration code speaks volumes about the realization that engineering quality doesn't need to be sacrificed in the pursuit of human-centered design. Indeed, engineering that serves the needs and expectations of the user, user-centric design, rather than what's convenient for Apple's programmers should be the norm instead of what the user base has endured for years from Apple. And this includes maintaining support for 32-bit applications, unlocking the Mac screen with iPhones and not just the Apple Watch, supporting other devices in HomeKit, etc.
 
Apple is notorious for failing to adequately test their data migration code when transitioning to new versions of their OSes and apps.

...no they aren't.

Indeed, engineering that serves the needs and expectations of the user, user-centric design, rather than what's convenient for Apple's programmers should be the norm instead of what the user base has endured for years from Apple.

As a single tear rolls down his cheek and a tiny violin plays softly in the corner of the room.

-------------------

A few are in near hysterics, but all signs thus far point towards that being complete nonsense.
 
APFS was designed and written ten years after ZFS, with the feature set of iOS and macOS users in mind, a very different audience from ZFS's. Instead of making claims without any evidence, please enlighten us and tell us in which ways ZFS is better for macOS users.

And then there's the little problem that ZFS is a legal nightmare, which alone would have kept Apple far away from it.

I'd say there is nothing to worry about, because Apple has already upgraded hundreds of millions of iOS devices from HFS+ to APFS without anyone complaining. (That said, the constraints are real: Apple needs to be able to upgrade drives that are as close to full as you'd ever want them, say 95%, though not drives that are 100% full, so there can be no duplication of data, and it must be done in such a way that if you have a power failure 95% of the way through the conversion, everything is still safe.)
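Here's a toy model of that crash-safety constraint (purely an illustration, not APFS internals): stage all new metadata in free space, never overwrite the existing structures, and publish the result with a single atomic switch.

```swift
// Toy model: the volume stays a valid HFS+ volume until one atomic
// commit makes it a valid APFS volume. An interrupted run changes nothing.
struct Volume {
    var hfsMetadata: [String]           // existing structures, untouched
    var stagedMetadata: [String] = []   // new metadata built in free space
    var activeFormat = "HFS+"           // the one and only commit point
}

func convert(_ volume: inout Volume, crashAtStep: Int? = nil) {
    volume.stagedMetadata = []          // a fresh run re-stages from scratch
    for (step, record) in volume.hfsMetadata.enumerated() {
        if step == crashAtStep { return }          // simulated power loss
        volume.stagedMetadata.append("apfs:" + record)
    }
    // Before this line the disk is still a valid HFS+ volume; after it,
    // a valid APFS volume. There is no partially converted state.
    volume.activeFormat = "APFS"
}

var vol = Volume(hfsMetadata: ["catalog", "extents", "attributes"])
convert(&vol, crashAtStep: 2)
print(vol.activeFormat)   // "HFS+": the interrupted run was harmless
convert(&vol)
print(vol.activeFormat)   // "APFS"
```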
Trying to explain file systems to people takes articles of text. I'm not doing it. I've already mentioned several benefits, each of which needs its own explanation. Just do a Google search if you're curious. There have been countless articles written on the subject.
ZFS is meaningless in the context of Apple devices, where fewer than 1% have ECC memory and more than one hard disk.

ZFS is for servers, where 100% have ECC memory.

Also, as Apple said, their SSDs have ECC memory and do wear leveling, error correction, etc., so that you don't need it, and they obviously aren't susceptible to mechanical damage.
You don't need ECC memory with ZFS. That claim has been debunked already, and it doesn't even make sense if you know how the file system works. True, it was designed for servers and data centers, but it's scalable enough to be used in embedded devices as well.
 
Really hope they restart the tick-tock cycle with macOS. Leopard/Snow Leopard, Lion/Mountain Lion, etc...

Unfortunately it seems we are mostly getting tocks without any ticks.

Not even Apple News, Messages apps/stickers, etc., to reach feature parity with iOS.

I'm guessing the jump to 64-bit is a predicate to UIKit for Mac in macOS Cupertino, though.
 
I held off updating my 2011 27" i3 iMac to Sierra because I thought it would be too bloated for my old girl to run. I may roll the dice on High Sierra when it comes out.
 
ZFS can't go into a mobile device, or a watch, though.

And we already know how Oracle is about other companies adopting "their" features (Google).
ZFS is more than scalable enough to go into an embedded device. I use it on a Raspberry Pi, and it has never had an issue with 1 GB of RAM.
 
I'm guessing the jump to 64-bit is a predicate to UIKit for Mac in macOS Cupertino, though.

Why should it? iPhones used 32-bit processors until the 5S. And there's no sense in replacing 64-bit AppKit with the less powerful UIKit.

Oh, wait. Dumbing down the OS. Perhaps you're right.
 
I wish I could count the drive failures I have had on one hand. I added 4 or 5 more to this stack this year alone - with probably 2 more beginning to show the first signs of nearing their end.
[photo: a stack of failed hard drives]



I actually do clean installs for every version, usually taking the chance to image the old one and then doing a fresh install, which makes me ensure I can still find the licenses, software, etc., as well as clean out stuff sitting in long-forgotten temporary directories. Call it annual house cleaning.

APFS of course has not been exercised the same way it will be here: other devices differ both in drive manufacturers and, of course, in that the desktop OS is a more wild and free environment where you do things you just don't get to do on locked-down devices... more chances of running into defects that may not have been hit before. I once ran a batch process that went through an entire bank's transactional database after regression testing, for a process that was about as potentially dangerous as a file system swap. Out of all the transactions on every single account, there was ONE lonely transaction that caused the entire system to abend... so you can never be sure.

I have two 12-bay Synology DiskStations and have found that I have had no failures in at least two years by sticking with HGST disks.
Why should it? iPhones used 32-bit processors until the 5S. And there's no sense in replacing 64-bit AppKit with the less powerful UIKit.

Oh, wait. Dumbing down the OS. Perhaps you're right.

Because if they are writing new code, they don't want to have to bring along the 32-bit baggage. Or the ARM chip they stick in Macs to accelerate it won't have 32-bit execution units. Who knows.
 
... Craig Federighi said during the Daring Fireball podcast from WWDC that during the iOS 10.1 and 10.2 upgrades, they actually "migrated" everyone's file systems to APFS - that is, built the extra metadata/header blocks in the disk's free space - then ran file system checks on the result, sent diagnostic reports back to Apple if anything was amiss, and then discarded the new blocks, leaving everyone with their unharmed HFS+ file systems. This explains both why the upgrade is so fast and why it went quite smoothly when iOS 10.3 rolled out.

and it explains why the iOS 10.1 and 10.2 upgrades were so sluggish and burned through users' batteries in half the time.
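For what it's worth, the dry run Federighi described boils down to something like the following sketch (all names invented for illustration): stage, verify, report, discard, with the HFS+ volume never modified.

```swift
// Illustration of a verify-and-discard dry run: the staged metadata
// lives only in free space and is simply thrown away at the end.
struct DryRunReport {
    let consistent: Bool
    let notes: [String]
}

func dryRunMigration(stage: () -> [String],
                     verify: ([String]) -> DryRunReport,
                     phoneHome: (DryRunReport) -> Void) {
    let staged = stage()          // built entirely in free space
    let report = verify(staged)   // the equivalent of an fsck pass
    phoneHome(report)             // diagnostics sent back either way
    // `staged` goes out of scope here: nothing on disk has changed.
}

dryRunMigration(
    stage: { ["apfs:catalog", "apfs:extents"] },
    verify: { staged in DryRunReport(consistent: !staged.isEmpty, notes: []) },
    phoneHome: { print("diagnostics:", $0.consistent ? "ok" : "inconsistent") }
)
```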
So the new Mac Pro is basically going to be an external GPU expansion chassis with a modular head.

It's going to be the old Mac Pro with another Thunderbolt port, plus a half-sized external GPU trash can that you can stack the Mac Pro trash can on top of ;)
 