
Analog Kid · macrumors G3 · Original poster
My system has decided to stop backing up to my Synology NAS, and I suspect it's because of the size of the backup. It claims there's 8+TB to back up, but I'm not sure exactly where the backup fails-- somewhere around the 6TB mark, I think. The TM interface isn't helpful in explaining why it failed. I don't get a red 'i' indicating a problem. It just says "waiting to complete first backup". If I say "back up now", it goes through the "preparing the backup" phase for a while and then silently fails.

Other machines are successfully backing up to this same NAS.

This machine backs up fine to a direct attached array.

I've tried deleting the old backup and starting over a few times without luck.

Anybody else use a Synology for a very large TM backup?
 
I just set up a Synology NAS and am seeing a similar problem. It starts backing up and seems to be working fine, and then stops. If I manually force it by clicking "back up now", it resumes for a while and then stops again.
 
Where would I find the Console app? Sorry if it's a dumb question. This is the first NAS I've got.

On your Mac, open /Applications/Utilities/Console and search for backupd entries. That may give you a reason why the backup stopped.
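If Console is overwhelming, you can also pull just the Time Machine messages from Terminal. Something like this should work on recent macOS versions (treat it as a starting point; the predicate may need tweaking on yours):

# Show Time Machine (backupd) log messages from the last hour
log show --predicate 'subsystem == "com.apple.TimeMachine"' --info --last 1h

# Or watch the messages live while a backup is running
log stream --predicate 'subsystem == "com.apple.TimeMachine"' --info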
 
I just set up a Synology NAS and am seeing a similar problem. It starts backing up and seems to be working fine, and then stops. If I manually force it by clicking "back up now", it resumes for a while and then stops again.
What I've been finding is that the backups just take a verrrrry long time to complete. I think TM thinks it's done, but the NAS spends hours doing some sort of internal bookkeeping.

Took me forever to see this because I kept assuming something was wrong and restarting stuff or otherwise intervening.

Let it sit for a few days and then check the Time Machine preference panel to see if it thinks a backup completed. I get hourly backups to my direct attached array, but maybe once a day or every day and a half to the NAS. I think the first one took a couple days between TM finishing its copy and being able to start the next one.
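If you don't want to keep opening the preference panel, tmutil should tell you the same thing from Terminal. A rough sketch (the backup volume needs to be mounted, and newer macOS versions may want Full Disk Access for Terminal):

# List the backups that have actually completed on the current destination
tmutil listbackups

# Show the path of the most recent completed backup
tmutil latestbackup

# Show what Time Machine is doing right now (phase, progress, etc.)
tmutil status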

Occasionally I'll get notices that TM can't back up to the array. I ignore them, and eventually it picks itself back up again.

I think it has to do with how the Synology software distributes data across the array and then adds parity. RAIDs hate lots of little files, which is what a TM backup generally is. I've got dual disk redundancy turned on, and I have a mix of different sized drives installed, which are probably all exacerbating the problem.
 

Funny you said that, because a little while ago I went to the office and noticed that we're now 60GB ahead. I use a 500GB read/write SSD cache, and I figured that would negate any parity issues. I guess not. Also, the NAS is verifying the disks or some such; it says performance may be impacted. Could that be part of the problem?
 
Yeah, that certainly could be part of the problem as well. I don't have an SSD cache, so I'm not sure what impact it has, but let the Time Machine backup complete, then give it way longer than you think it deserves to silently do whatever it does, and see if it shows up in the preference panel.

I'm not sure that would be acceptable to me as my primary backup, but as a secondary backup, I can live with it. The direct attached array works as my primary.
 
You guys may know this (maybe not), but Time Machine is slow because it's supposed to be slow. Apple made TM a low-priority operation with throttled disk I/O.

Archiving and copying your entire drive across a local network for real-time user accessibility requires quite a lot of resources. At 100%, the TM backup process would have a noticeable impact on system and network performance, and of course the NAS likely wouldn't be usable until it was done. Since TM automatically backs up every hour, and some people have multiple Macs, it wouldn't be a good user experience to have multiple Macs using a lot of resources for a few minutes every hour. Throttling its performance makes it run transparently in the background, albeit slowly.

Using Terminal commands you can turn off TM's disk I/O throttling temporarily for the initial backup. Personally I've never done it, but reported results seem to be 10x faster, if not more... (link to one of the original Reddit threads I've seen on the topic).
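For anyone curious, the commands usually posted for this look like the following. I haven't verified them myself, and the sysctl knob could change between macOS versions, so treat this as a sketch rather than a recommendation:

# Temporarily disable the low-priority I/O throttling that slows Time Machine down
sudo sysctl debug.lowpri_throttle_enabled=0

# Turn the throttling back on once the initial backup has finished (it also resets on reboot)
sudo sysctl debug.lowpri_throttle_enabled=1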

Speaking of the initial backup, it's inherently a slower transfer because it's tons of random file sizes, including a lot of small files. Even if there weren't bottlenecks and throttling weren't applied to TM's I/O access, it still wouldn't be as fast as the large single-file transfers we like to use as the benchmark. Once the initial backup is done, subsequent backups are smaller and faster. Be aware, though, that a virtual machine that has only been SLIGHTLY modified may make TM back up the entire VM image due to the change. I exclude them from TM.
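Excluding them is easy with tmutil. The path below is just an example of where a VM folder might live, so substitute your own:

# Exclude a VM folder from Time Machine backups (example path)
sudo tmutil addexclusion -p "$HOME/Virtual Machines.localized"

# Confirm the exclusion took effect
tmutil isexcluded "$HOME/Virtual Machines.localized"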

The initial preparation takes a very long time because it's a deep traversal of the entire drive (done at a non-invasive rate). There are scenarios that cause a full deep scan to be done again, such as an interruption during a backup, the Mac being improperly shut down, the drive being pulled while mounted, a macOS update, etc.
 

That's an important point, but I think there's more going on than just TM throttling resources. I'm averaging a day and a half between backups to the NAS, versus hourly to the direct attached RAID. I think it's an interaction with the Synology architecture...

I have tried the command line change in the past (and then reverted)-- it seemed to make a difference in how long it took for TM to finish, but it still took forever to start the next backup. I didn't do any controlled experiments or anything, though. There seem to be two distinct phases-- the part that TM knows about, and the part that happens after that.
 

You aren't using AFP by chance? While I was able to get it to work, the performance was similar to your description, regardless of what Apple says...

[Screenshot attachment: Screen Shot 2020-07-31 at 8.15.53 PM.png]


Performance with AFP was utter garbage. Automatic updates would get corrupted. Max throughput was 7-10MB/s. Prep and verification were slower too. Apple doesn't recommend 'write caching' for TM for some reason, by the way.

I'm using a DS216j, so there is a high probability your Synology NAS is better than mine. My TM backups are ~2.5TB for my iMac and ~500GB for my MBP. While nowhere near as large as yours, it works very well.
 

Yeah, I'd switched to SMB some time back. I'd also gotten it to work with AFP, but I'd read that AFP is deprecated and Apple is moving to SMB. I don't see an option for write caching, so I'm not sure if that's somehow enabled.
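If you ever want to double-check which protocol Time Machine is actually using, tmutil will print the destination URL (smb:// vs afp://):

# List configured Time Machine destinations, including their URLs
tmutil destinationinfo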

I can't say with certainty why the large backup is choking, but if my theory is right that it's a bookkeeping issue with the asymmetric RAID, then the smaller drive count in your unit might be an advantage. I have a feeling part of my problem is in how Synology balances data storage across multiple drives of different sizes.
 
Yeah, that certainly could be part of the problem as well. I don't have an SSD cache, so I'm not sure what impact it has, but let the Time Machine backup complete, then give it way longer than you think it deserves to silently do whatever it does, and see if it shows up in the preference panel.

I'm not sure that would be acceptable to me as my primary backup, but as a secondary backup, I can live with it. The direct attached array works as my primary.

Exactly what happened. The initial backup is done and now everything is working fine. Disk verification definitely causes some lag as well. Now the NAS is great. I am able to run Steam games from it and they work perfectly fine.
 
I just updated my 1815+ to an 1821+ with an SSD cache and it is behaving much, much better now. Still slower to complete than my direct attached array, but easily completes within the hour.

DSM 7 allows you to pin the Btrfs metadata into the SSD cache, which I suspect is a particularly big help for these kinds of incremental backup operations.
 
We've had a variety of similar problems involving several Macs and several Synology NASes. We're on the most current DSM v6 on all of them and use SMB. The Macs are mostly on the latest Big Sur, though one is on Catalina and one on Monterey.

With my current MBP (Big Sur) backing up to a 3618xs (both with 10Gb connections), the result is either never finishing the 'preparing backup' phase, even after more than 2 days, or seeming to hang somewhere beyond that point. A direct attached 6TB SanDisk thumb drive finishes the initial backup in about 3 hours. Blackmagic Disk Speed Test shows 686 MB/s read and 922 MB/s write to the 3618xs, which is about 2x what it shows for the 6TB SanDisk (about 400 MB/s read/write).

Other Macs backing up to this or one of our other Synology NASes will work well for a while, but then suddenly stop working, with the only solution seeming to be to start over again with a new first backup.

We've not had these problems with any DAS or NAS other than Synology, so this all seems specific to Synology/DSM.
 