Because if they are writing new code, they don't want to have to bring along the 32-bit baggage. Or the ARM chip they stick in Macs to accelerate it won't have 32-bit execution units. Who knows.

Writing new code has nothing to do with bringing along 32-bit frameworks, especially if they adopted ARM for the Mac's main CPU. macOS has been 64-bit since Tiger.
 
I'm disappointed that High Sierra seems to have no improvements to the performance of Time Machine. I've whined ad infinitum here about how slow this tool is. Right now, I'm in the middle of a 9.5 GB backup that TM says will take 18 hours. That is more than just plainly ridiculous. This isn't a huge new release being backed up; it's just a normal incremental backup. 18 hours? I mean, come on. Sure, maybe my mid-2012 rMBP having only 802.11n Wi-Fi is having an impact, but there's no reason a backup should take this long. At the very least there needs to be some kind of checkpoint built in, so that if the Wi-Fi connection is lost, like when I put the system to sleep, backup progress isn't lost and it can pick up where it left off the next time Wi-Fi comes up, and I don't have to start over. I like the functionality of TM, but I'm at the point where I want to find a different tool that actually works. As I've said before, my Yoga takes 18 minutes to back up. That's reasonable. 18 hours is STUPID. Oh, sorry, I just checked on how the backup is going, and the estimated time is now up to 19 hours.

Come on, Apple, put a good backup tool into this product. This one is so unusable that I'll NEVER be able to take a full backup. That is irresponsible. Tim, you listening?
Are you backing up VM drives? Because what you describe doesn't make any sense unless you're doing something atypical. TM runs great for all 6 of my machines backing up to the same NAS.
 
Wait what? That's faster than the disk's theoretical throughput. You sure it's not just creating a pointer to the original file?

Edit: I read that it doesn't actually copy it; it just creates a pointer, and blocks are copied as needed if you modify one of the files. I can imagine this being very useful for version control, and I'm thinking of how they can improve Time Machine with this. So, if I copy to an external disk for transfer or backup purposes, does it *actually* copy it? Has to.
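For what it's worth, here's a rough way to watch the clone behaviour from code. This is a hedged sketch: the file paths are made up, and the "FileManager clones on APFS" part is my understanding of how the copy is implemented under the hood, not something this thread confirms.

[CODE]
import Foundation

// Sketch: on an APFS volume, a FileManager copy is (as I understand it) a
// copy-on-write clone, so even a huge "copy" returns almost instantly and
// shares blocks with the original until either file is modified.
let fm = FileManager.default
let original = URL(fileURLWithPath: "/tmp/big-video.mov")        // hypothetical large file
let clone    = URL(fileURLWithPath: "/tmp/big-video-copy.mov")

try? fm.removeItem(at: clone)                                    // clear out any old copy
let start = Date()
do {
    try fm.copyItem(at: original, to: clone)                     // clone on APFS, full copy on HFS+
    print("Copy took \(Date().timeIntervalSince(start)) s")      // near-zero for a clone
} catch {
    print("Copy failed: \(error)")
}
// Copying to a non-APFS external disk (or over the network) does materialise
// all the bytes again, so yes, a copy made for backup purposes is a real copy.
[/CODE]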
[doublepost=1497722765][/doublepost]
No it doesn't. What if the cable connecting to my SSD flips some bits, or the firmware does it? These kinds of checks are supposed to be end-to-end, both for practical and design reasons.
Versioning file systems (where multiple versions of a file are kept under the same name) have existed for at least 40 years. I am saddened that no mainstream OS is using them or at least has the option to use one. I can remember many instances where this feature saved me lots and lots of work when an edit or a series of edits went wrong. Yes, you can do it with systems like Git, but to have it there in the OS and enabled by default was wonderful.
 
Versioning file systems (where multiple versions of a file are kept under the same name) have existed for at least 40 years. I am saddened that no mainstream OS is using them or at least has the option to use one. I can remember many instances where this feature saved me lots and lots of work when an edit or a series of edits went wrong. Yes, you can do it with systems like Git, but to have it there in the OS and enabled by default was wonderful.
Yeah, that was my thought, but I didn't know other filesystems did this. One problem with git is that it can't handle large files well.
 
No wonder Apple called it High Sierra... It's "high" on their agenda... H.265 :D

(I wonder if that means updates will come out more regularly)
 
No wonder Apple called it High Sierra... It's "high" on their agenda... H.265 :D

(I wonder if that means updates will come out more regularly)
I think they boxed themselves into tradition, and having done a half and now a full tick-tock, they've run out of "x Sierra" names.... For some reason they are avoiding my favourite suggestion for a name.... "Death Valley"
[doublepost=1497783546][/doublepost]It would be nice if they took the effort to convert non-core applications into "purchased" (but still free) App Store applications on upgrade and allowed them to be installed separately if the user wanted (non-core being applications such as "chess", "dvd player", and potentially even some store-connected apps like "book reader", to name a few).
 
I think they boxed themselves into tradition, and having done a half and now a full tick-tock, they've run out of "x Sierra" names.... For some reason they are avoiding my favourite suggestion for a name.... "Death Valley"

I love it! :apple::D

[doublepost=1497783546][/doublepost]It would be nice if they took the effort to convert non-core applications into "purchased" (but still free) App Store applications on upgrade and allowed them to be installed separately if the user wanted (non-core being applications such as "chess", "dvd player", and potentially even some store-connected apps like "book reader", to name a few).

Agreed! They should also introduce some games as well, like solitaire and minefield! :D
 
Well it makes sense to not put any resources into that. You would want to control your home from something that's always with you, like the phone.

If users only wanted to control their home from their phone, why would Apple put HomeKit functionality in the Apple TV? And how do you expect to control devices from the phone when you're away if there aren't hubs in the home which the phone can communicate with?
 
Well it makes sense to not put any resources into [HomeKit]. You would want to control your home from something that's always with you, like the phone.
Or convenient, like HomePod (or Echo or Google Home).

On the other hand, Apple has made it possible to make phone calls or use FaceTime using your Mac. When you're working (or playing) in front of your MacBook or iMac, sometimes that's the most convenient device to use to see who's at the door or whether the lights are on in the garage. My phone is almost always with me when I'm away from home, but not necessarily when I am home.
 
If users only wanted to control their home from their phone, why would Apple put HomeKit functionality in the Apple TV? And how do you expect to control devices from the phone when you're away if there aren't hubs in the home which the phone can communicate with?

So that they don't have to add it to macOS?
 
"and in High Sierra and iOS 11, all of your iMessage conversations are saved in iCloud, saving more storage space."

This is disturbing. Using Signal more and more.
I like the autoplay management in Safari.
What happens if a USB drive is used to manually transfer files, say MS Word documents or graphics, to an older system? Or vice versa after upgrading to APFS?
 
Anyone had any real experience with APFS on the Mac? Are there any real performance improvements? Or did the system just get slower, like it did with the iPhone?
 
Seriously, is it really too much to ask to bring a News app to macOS? I've emailed Apple about this multiple times in the past 2 years. I spend way more time on my MacBook than I do on my iPhone.
Mail.app used to have an RSS reader built-in, but they took that out years ago. Thunderbird has an RSS reader, but I'm not sure if you're ready to switch email clients. But if you do switch to Thunderbird, make sure to turn off the Global Search and Indexer - it uses a lot of CPU power for no apparent reason.
 
Are you backing up VM drives? Because what you describe doesn't make any sense unless you're doing something atypical. TM runs great for all 6 of my machines backing up to the same NAS.
Nope. I'm just doing vanilla backups, no VMs. I tried them over an Ethernet connection (with an Apple Thunderbolt-to-Ethernet adapter and a Cat 5 cable plugged into my Asus AC3100 router, with the NAS plugged into that) and got the same result. Then I tried plugging my Mac directly into my NAS and got the same result with that. I thought this all might have meant that the NAS itself was just slow, but since directly connecting to its gigabit port gave the same results, now I don't. I still think it's an issue with Time Machine, your results notwithstanding. I say that because I also back up my Yoga to it, which is less data, but it still takes only 18 minutes over an 802.11ac connection.
 
That is not what that specification means in the slightest. Learn what it means and then come back.

You're right, it's not exactly what it means. However, it is a very functional simplification. Seeing as this isn't a thread about hardware I was somewhat hoping to avoid talking about the details... but apparently I cannot.

So first things first... statistics. The typical consumer drive specification lists "non-recoverable read errors per bits read" and tends to report "<1 in 10^14", meaning 1 in 10^14 is the worst-case scenario (less than). But this is an average taken over a large number of drives. 10^14 bits is 100,000,000,000,000 bits, which is around 11.4 terabytes of data (base 2). Being an average, you could read 100 TB off your disk and come back with no issues, or you could read 1 TB and have an issue. Generally most people will be closer to the middle of that bell curve (8-14 TB).
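To put rough numbers on that spec, here is the arithmetic as a quick Swift sketch. The 8 TiB drive size is just an example, and treating the spec as a uniform per-bit probability is a simplification, as noted above.

[CODE]
import Foundation

let bitErrorRate = 1.0 / 1e14             // spec: fewer than 1 unrecoverable error per 10^14 bits read
let bitsPerTiB = 8.0 * pow(2.0, 40)       // 8 bits per byte, 2^40 bytes per TiB

// Expected data read per unrecoverable error, in TiB (base 2)
let tibPerError = 1.0 / (bitErrorRate * bitsPerTiB)
print(String(format: "~%.1f TiB read per URE at the spec limit", tibPerError))        // ~11.4

// Chance of hitting at least one URE while reading an 8 TiB drive end to end
let bitsRead = 8.0 * bitsPerTiB
let pAtLeastOne = 1.0 - pow(1.0 - bitErrorRate, bitsRead)
print(String(format: "P(at least one URE over 8 TiB) = %.0f%%", pAtLeastOne * 100))   // roughly 50%
[/CODE]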

What happens if you flip a bit? The first bulwark is the drive's per-sector ECC. Depending on the drive, you will probably have either 4,096 data bits (512n) or 32,768 data bits (4Kn) per sector. One bit flip in a sector will usually result in a slow read as the drive firmware transparently reconstructs the sector from the ECC data. If it is able to reconstruct the sector, it will pass the read along to the system and then go re-write the correct data, to a reallocated sector from the spare pool if need be.

This happens constantly, probably daily. Something causes a bit flip in resident data or on read (a platter deposition issue, random energized particles, stray solar radiation, a bit of dust in the enclosure, electrical noise, and many other possibilities). The drive firmware compares the data to the stored ECC and they don't match, so the drive has to figure out whether the flip is in the ECC or in the data. Usually it can, but sometimes it cannot, and then you get an Unrecoverable Read Error (URE). If it can, it will either regenerate the ECC from the good data or reconstruct the sector using the ECC.

The big issue comes when a single sector acquires two flipped bits. Then you essentially always have a URE, because there is no way to reconstruct the sector. Interestingly, modern 4Kn sectors are more likely to acquire two errors than the old 512n sectors, simply by virtue of being eight times larger.
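To make the one-flip-versus-two-flips point concrete, here is a toy single-error-correcting, double-error-detecting code (Hamming(7,4) plus an overall parity bit) in Swift. Real drives use far stronger ECC over whole sectors, so treat this purely as an illustration of why one flip is fixable and two flips are the "URE" case.

[CODE]
struct Codeword { var bits: [Int] }          // index 0 = overall parity, 1...7 = Hamming positions

func encode(_ data: [Int]) -> Codeword {     // data = four 0/1 values
    var b = [Int](repeating: 0, count: 8)
    b[3] = data[0]; b[5] = data[1]; b[6] = data[2]; b[7] = data[3]
    b[1] = b[3] ^ b[5] ^ b[7]                // p1 covers positions 3, 5, 7
    b[2] = b[3] ^ b[6] ^ b[7]                // p2 covers positions 3, 6, 7
    b[4] = b[5] ^ b[6] ^ b[7]                // p4 covers positions 5, 6, 7
    b[0] = b[1...7].reduce(0, ^)             // overall parity over the seven code bits
    return Codeword(bits: b)
}

enum DecodeResult { case clean, corrected(position: Int), unrecoverable }

func decode(_ cw: Codeword) -> DecodeResult {
    let b = cw.bits
    let syndrome = (b[1] ^ b[3] ^ b[5] ^ b[7]) |
                   (b[2] ^ b[3] ^ b[6] ^ b[7]) << 1 |
                   (b[4] ^ b[5] ^ b[6] ^ b[7]) << 2      // nonzero syndrome points at the bad bit
    let overallOK = b[1...7].reduce(0, ^) == b[0]
    switch (syndrome, overallOK) {
    case (0, true):  return .clean
    case (_, false): return .corrected(position: syndrome)  // single flip: locate it and fix it
    default:         return .unrecoverable                  // two flips: detected, but not fixable
    }
}

var cw = encode([1, 0, 1, 1])
cw.bits[5] ^= 1                  // one flipped bit
print(decode(cw))                // corrected(position: 5)
cw.bits[6] ^= 1                  // a second flip in the same codeword
print(decode(cw))                // unrecoverable
[/CODE]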

If you read data often then you are likely to catch single-bit issues in sectors and correct them. However, data you read very rarely does run into issues, because the bit errors go unaddressed and it becomes much more likely that a sector will acquire two flips and a URE. Data on a mechanical platter degrades over time, as you use your computer, every hour, every day.

This covers traditional spinning-rust media. SSDs are a bit different: better in some ways and worse in others (nastier failure modes). Your backup disk is probably the place that needs data integrity most of all because, while you should be actively reading at least some of the stuff on your main drive, you rarely read from your backup drive and it spends a lot of time powered down. How often should you read all your data? Once every six months? Probably more often if possible. That means reading every single byte off disk. A few caveats here:
1. With HFS+ and APFS you might not know you have an issue even if you read every file.
2. There is no good mechanism to read every byte off disk (maybe dd?).
3. What do you do if you know a file is corrupt? You can't know how long it was corrupt, or whether the backup is corrupt too.
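In the absence of file-system checksums, the closest thing to a DIY scrub is to hash every file periodically and diff the manifests between runs. A rough sketch follows; it uses CryptoKit purely for brevity (which needs a newer macOS than High Sierra; CommonCrypto would do the same job there), and large files should really be streamed rather than read whole.

[CODE]
import Foundation
import CryptoKit   // macOS 10.15+; use CommonCrypto on older systems

// DIY "scrub": hash every regular file under a root so two runs can be diffed
// to spot files whose bytes changed even though nobody modified them.
func hashFiles(under root: URL) -> [String: String] {
    var manifest: [String: String] = [:]
    guard let walker = FileManager.default.enumerator(at: root,
                                                      includingPropertiesForKeys: [.isRegularFileKey]) else {
        return manifest
    }
    for case let url as URL in walker {
        guard (try? url.resourceValues(forKeys: [.isRegularFileKey]))?.isRegularFile == true,
              let data = try? Data(contentsOf: url) else { continue }   // fine for a sketch; stream big files
        manifest[url.path] = SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
    }
    return manifest
}

// Save the manifest each run and compare: a changed hash on a file whose
// modification date did not change is the red flag you otherwise never see.
let manifest = hashFiles(under: URL(fileURLWithPath: "/Volumes/Backup"))   // hypothetical path
print("Hashed \(manifest.count) files")
[/CODE]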

To come full circle, it is useful to have a rule of thumb to model risk rather than having to do the math every time. To that end, I like to use 12 TB of reads per error for consumer mechanical drives. In practice, in my arrays, that actually seems about right: scrubs tend to yield an issue every 8-15 TB of reads, although that takes into account much more than problems at the read level; it's firmware bugs in the disk and PCH, connector issues, electron leakage across paths, and so on.


It is always a balance though, and APFS and the newer hardware do move the needle... and if you let it just sit and "rot", no level of redundancy will save you.
Actually, automatic scrubbing and a good file system will save you... but let's just pretend it won't???


Dealing in any licensing with Oracle is toxic. Tim's not stupid enough to even try.
They could have just built it into APFS and not dealt with Oracle at all? I'm talking about a feature anybody can implement, not implementing ZFS.
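For anyone wondering what "a feature anybody can implement" amounts to, here is a toy sketch of write-time block checksumming with verify-on-read. The checksum is a simple Fletcher-style running sum (not ZFS's actual fletcher4), and a real file system keeps the checksum in parent metadata rather than an in-memory dictionary; this is only the shape of the idea.

[CODE]
import Foundation

func fletcherStyleChecksum(_ data: Data) -> UInt64 {
    var a: UInt64 = 0, b: UInt64 = 0
    for byte in data {                 // two running sums, Fletcher-style
        a &+= UInt64(byte)
        b &+= a
    }
    return (b << 32) | (a & 0xFFFF_FFFF)
}

struct ChecksummedStore {
    private var blocks: [Int: Data] = [:]
    private var sums:   [Int: UInt64] = [:]

    mutating func write(block index: Int, _ data: Data) {
        blocks[index] = data
        sums[index] = fletcherStyleChecksum(data)    // checksum computed at write time
    }

    func read(block index: Int) -> Data? {
        guard let data = blocks[index], let sum = sums[index] else { return nil }
        // End-to-end check: if anything flipped a bit between write and read,
        // the stored checksum no longer matches and the read is refused.
        return fletcherStyleChecksum(data) == sum ? data : nil
    }
}

var store = ChecksummedStore()
store.write(block: 0, Data("pretend this is 4 KiB of user data".utf8))
print(store.read(block: 0) != nil)     // true while the bytes still match their checksum
[/CODE]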

Besides that, most of the missing features from ZFS are really server-related. They'd be "nice to have" on workstations, but outside the Unix/Linux crowd nobody would know what to do with them. Apple is solidly out of the server business; they don't really need to chase features like these, because they are going for "computing appliances" and storing everything important on cloud servers now.
This is just so wrong-headed. We all want better technology. If you don't want better technology, I don't think I can ever see from your point of view. I want my devices to work better, more reliably, be more correct, be faster, be more efficient all the time. If you don't, and you're satisfied with what we have now, I don't know what you're doing in a computer enthusiast forum.

Microsoft only supports ReFS on servers right now. Ubuntu uses ZFS only by the grace of Oracle, who could cancel permission at any time. BTRFS on Linux is still in late beta, where it's 90% great but certain configurations come with "it might explode" levels of support. Effectively zero consumer devices have the kind of support you're demanding right now. And we've done fine so far.
OS X was founded on the idea that server-grade software was appropriate and empowering for consumers. From the UNIX core to the built-in Apache server. 64-bit, ACLs, a robust network stack with LACP. Consumers don't need any of this. But it makes for a much better system if everybody gets access to the big guns. I don't want to hear anybody backing off on data integrity because "only servers need it". That's a horrific cop-out. You want it because it's good and there's no reason you shouldn't have it. Seriously, what has happened to the Mac OS user base? Has everybody stopped being ambitious about great technology? That's depressing.

The fact that every other company has data integrity somewhere in their pipeline is heartening. The fact that Apple does not is ugly. Also, Synology is currently shipping part of BTRFS in production for its block checksumming, so that part at least seems to be complete and working. Ubuntu, I believe, uses the open-source fork of ZFS; it's not tied to Oracle in that sense.

ZFS is effectively dead software now (and Solaris as well) unless Oracle changes its ways because we have draconian IP laws. The era of companies with tens of millions to devote to pursuing arcane pure data science died with SUN. "A few mistakes in a billion" is good enough for 95% of people. That's all that the bean counters care about.
Fortunately for us, you're wrong. The current open source mainline of ZFS is active and robust with lots of users. Oracle's version is seen as a fork with closed source. As far as your attitude about data integrity, it's pretty appalling. I never thought I would find so much hostility towards good technology in an Apple forum.

If it's that bad then tell me why my OS has always worked since the *beginning.

*The beginning = I started using OS X right from the beginning (10.0 beta); I have never reinstalled, yet my system still runs as it always has, healthy.
I am sure that my system has read way more than 12 TB in that period, yet I never ran into problems.

If it's just one single bit, that means one of my photos is just missing less than a pixel, so what!
It almost certainly has happened; you just have never noticed, or you haven't read that file back yet. Heck, you've probably experienced HFS+ corruption too, but you must not have noticed. I doubt you check system.log either if you think your system is "running healthy". You shouldn't have to do any of that; your system should take care of itself, and the fact that it doesn't is super bad.

Also, with JPEGs a corrupt read generally makes the rest of the image unreadable from the corrupt point onward, looking something like this: https://d1ro734fq21xhf.cloudfront.net/attachments/00Xe3e-299791684.jpg
 
When installing High Sierra, it will convert to a new, more modern file system called Apple File System or APFS. APFS is safe, secure, and optimized for.....

I hope the Apple File System is optimized for the Kaby Lake Intel NUC.
 
Don't sugar coat this. Apple are looking to release Sierra 'S'. Lazy
that's normal (or, at least, there's a pattern that's been shown)

once apple shifted to yearly releases of the OS instead of every two years, the 'S' year has been a refined version of the original release and named accordingly.

there was Leopard -> Snow Leopard (though iirc, these were two years apart?)
Lion -> Mountain Lion
Mavericks (had no 'S' year)
Yosemite -> El Capitan (El Capitan is a formation within Yosemite park)
and now there's Sierra -> High Sierra

just sayin.
[doublepost=1497803948][/doublepost]
Really hope they start the tick-tock cycle with macOS again. Leopard/Snow Leopard, Lion/Mt Lion etc...

heh.. see above ;)
 
that's normal (or, at least, there's a pattern that's been shown)

once apple shifted to yearly releases of the OS instead of every two years, the 'S' year has been a refined version of the original release and named accordingly.

there was Leopard -> Snow Leopard (though iirc, these were two years apart?)
Lion -> Mountain Lion
Mavericks (had no 'S' year)
Yosemite -> El Capitan (El Capitan is a formation within Yosemite park)
and now there's Sierra -> High Sierra

just sayin.
[doublepost=1497803948][/doublepost]

heh.. see above ;)
oooo, that is why people kept giving me a weird look..

I thought it was Yo' Semite o_O
 
So with APFS and High Sierra seemingly to be optimized for an SSD, do we still have to manually enable trim for third party drives?
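For what it's worth, third-party SSDs have needed TRIM turned on manually with "sudo trimforce enable" in Terminal since 10.10.4; I haven't seen anything from Apple saying High Sierra or APFS changes that default, so I'd assume it still applies.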
 