Insane memory usage by backupd

Superspeed500

macrumors regular
Original poster
Jul 25, 2013
188
42
I have a 2010 MacBook Pro running High Sierra. The Mac usually works great, but it does sometimes get super slow when a Time Machine backup starts. It sometimes lags so much that I can barely move the cursor across the screen. I noticed that backupd (a Time Machine process, as far as I know) used 9.6 GB of memory. My machine only has 4 GB of memory, so I assume it swaps to the hard drive like crazy. The hard drive is an SSD. I have a picture from Activity Monitor here.

Is there anything I can do to prevent backupd from using an insane amount of RAM whenever a Time Machine backup starts? Time Machine is backing up to a QNAP NAS on my local network. It's super annoying that TM makes my Mac super slow whenever it starts, and since I do need regular backups, turning off TM completely is not an option.

Note: Docker Desktop is installed and its image is backed up to TM. Could that be what's causing the problems?

Thanks in advance :)
 

casperes1996

macrumors 601
Jan 26, 2014
4,124
2,005
Horsens, Denmark
Just to clarify, does this happen with every backup or just some of them? Now, 4 GB of RAM is not very much, but it is still unreasonable for TM to use that much memory. While it makes sense for it to move data into RAM for the backup procedure, it should be able to do this at the block level, so no matter the type or size of a file it should be able to take it in as small pieces as necessary, and it should never be a high-priority process.
 

Superspeed500

macrumors regular
Original poster
Jul 25, 2013
188
42
Just to clarify, does this happen with every backup or just some of them? Now, 4 GB of RAM is not very much, but it is still unreasonable for TM to use that much memory. While it makes sense for it to move data into RAM for the backup procedure, it should be able to do this at the block level, so no matter the type or size of a file it should be able to take it in as small pieces as necessary, and it should never be a high-priority process.
Agreed, 4 GB of RAM is nothing these days :)

It doesn't happen every time. Killing the process gracefully solves the problem temporarily, though I sometimes have to kill it twice. Are there any log files that might have useful information related to the problem?
 

Brian33

macrumors 6502a
Apr 30, 2008
760
43
USA (Virginia)
Yeah, something's wrong if backupd is using that much memory. I'd look in the log to see if Time Machine is putting out any useful error messages during the problematic backups. (Note that not all TM errors are really a problem -- compare "fast" backups to "slow" backups for differences in messages.)

The only way I know to catch these log messages (in Sierra and later) is with the 'log' command in Terminal.app. Unfortunately the syntax of the command is pretty complicated and I never remember it. Therefore, I stick the command into a shell script with a name I can remember. ...I tried to upload the scripts but it seems shell scripts are not allowed, so I show the commands below...


This command will show all Time Machine log messages in the log starting from 2019-03-12, for example.

log show --style syslog --predicate 'senderImagePath contains[cd] "TimeMachine"' --info --start 2019-03-12

The command below will show Time Machine log messages in real time as a TM backup progresses. You can enter the command first, then start a manual Time Machine backup. Hit Ctrl-C when you're done to return to the command line:

log stream --style syslog --predicate 'senderImagePath contains[cd] "TimeMachine"' --info
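
Since I keep these in a script anyway, here's roughly what my wrapper looks like — a minimal sketch, with function and variable names I just made up (the predicate is the same one shown above):

```shell
# Sketch of a wrapper so I don't have to remember the syntax.
# tm_log_show and DRY_RUN are my own inventions, not anything official.
tm_log_show() {
    start="${1:?usage: tm_log_show YYYY-MM-DD}"
    predicate='senderImagePath contains[cd] "TimeMachine"'
    if [ -n "$DRY_RUN" ]; then
        # DRY_RUN=1 just prints the command instead of running it,
        # handy for checking the syntax on a non-Mac box
        echo log show --style syslog --predicate "$predicate" --info --start "$start"
    else
        log show --style syslog --predicate "$predicate" --info --start "$start"
    fi
}
```

Stick it in your ~/.profile (or a script on your PATH) and you only have to remember the short name.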
 

chabig

macrumors 603
Sep 6, 2002
6,086
3,308
Can you just walk away while the Time Machine backup finishes? You might only need to do this once. Subsequent backups ought to be faster. I suspect that backing up your Docker image is killing it. Try going to Time Machine options and setting it to not back up that file.
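
If you prefer the command line over the Options panel, tmutil can add the exclusion too. A sketch — the Docker.raw path is an assumption about where Docker Desktop keeps its image, so verify it on your own machine first:

```shell
# Sketch: exclude a file from Time Machine with tmutil instead of the
# Options panel. The Docker.raw path below is an assumption -- check
# where Docker Desktop actually keeps its image on your Mac.
exclude_from_tm() {
    target="${1:?usage: exclude_from_tm /path/to/exclude}"
    if command -v tmutil >/dev/null 2>&1; then
        tmutil addexclusion "$target"   # macOS only
        tmutil isexcluded "$target"     # confirm it took effect
    else
        echo "would run: tmutil addexclusion $target"   # elsewhere, just show the plan
    fi
}

exclude_from_tm "$HOME/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw"
```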
 

casperes1996

macrumors 601
Jan 26, 2014
4,124
2,005
Horsens, Denmark
The only way I know to catch these log messages (in Sierra and later) is with the 'log' command in Terminal.app. Unfortunately the syntax of the command is pretty complicated and I never remember it. Therefore, I stick the command into a shell script with a name I can remember. ...I tried to upload the scripts but it seems shell scripts are not allowed, so I show the commands below...
I'm so, so sorry but I am a pedant at heart so I'll have to do this.
The syntax of the command is actually not complicated at all. It's entirely standard syntax. The semantics of it do seem a bit hefty however.

Again, sorry. Your point came across perfectly fine and was entirely valid, but as mentioned, pedantry and whatnot.
 

Superspeed500

macrumors regular
Original poster
Jul 25, 2013
188
42
Thanks for all the answers. I have skimmed through the parts of the log file from 28 February to this date and I found the following:
  • Some of the backups from 28 February to 2 March had the following errors:
    • Not starting scheduled Time Machine backup: Backup already running.
    • Snapshot sizing took longer than 4.00 seconds - timing out!
  • The backup on 4 March had the following messages:
    • Event store UUIDs don't match for volume: Macintosh SSD
    • Deep event scan at path:/Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/S500-MBP/2019-03-04-161655/Macintosh SSD reason:must scan subdirs|new event db|
    • Running deep scan - looking for changes after 2019-03-04 08:10:50 +0000
  • Lots of the backups after 4 March have the following message for several different attributes:
    • Failed to remove attribute 'com.apple.backupd.SnapshotVolumeLastFSEventID' from 'file:///', error: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
  • The backup on 6 March had the following messages:
    • Error writing to backup log. NSFileHandleOperationException:*** -[NSConcreteFileHandle writeData:]: Input/output error
      (The two errors below were probably caused by me closing the lid while a backup was in progress)
    • Failed to eject volume (null) (FSVolumeRefNum: -170; status: -35; dissenting pid: -1)
    • Failed to eject Time Machine disk image: /Volumes/TMBackup/S500-MBP.sparsebundle
    • Error: (-36) SrcErr:NO Copying /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/S500-MBP/2019-03-06-135928/Macintosh SSD/Users/superspeed500/Documents/Prosjekter/Web/klesskap/node_modules/lodash/join.js to /Volumes/Time Machine-sikkerhetskopier/Backups.backupdb/S500-MBP/2019-03-06-135930.inProgress/29599823-214C-4891-BF32-FB3441E30B94/Macintosh SSD/Users/superspeed500/Documents/Prosjekter/Web/klesskap/node_modules/lodash
I have saved the result of the first command that Brian33 mentioned on my hard drive. I have not tried the other command.

It also looks like the problem has gone away, but I am not entirely sure yet. Could the problem be caused by any of the log messages above?
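
For anyone curious, this is roughly how I counted the repeated errors in the saved log (tm_log.txt is just what I named the file I redirected the `log show` output into):

```shell
# Count how often each error/failure line shows up in a saved log dump.
# "tm_log.txt" is only my own filename for the redirected `log show` output.
tm_log_errors() {
    file="${1:?usage: tm_log_errors saved_log.txt}"
    grep -iE 'error|fail' "$file" | sort | uniq -c | sort -rn
}

# tm_log_errors tm_log.txt
```

The most frequent messages bubble to the top, which made the repeated ones easy to spot.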
 

Brian33

macrumors 6502a
Apr 30, 2008
760
43
USA (Virginia)
All the messages up to the 6 March backup are messages I've seen on my own Macs. While some of the messages are annoying, I don't believe they are causing a problem. Backups that do the "Deep event scan" do seem more processor-intensive, but I wouldn't think they'd make the machine "super slow" or laggy, even on a 2010 MBP.


The backup on 6 March had the following messages:
  • Error writing to backup log. NSFileHandleOperationException:*** -[NSConcreteFileHandle writeData:]: Input/output error
    (The two errors below were probably caused by me closing the lid while a backup was in progress)
  • Failed to eject volume (null) (FSVolumeRefNum: -170; status: -35; dissenting pid: -1)
  • Failed to eject Time Machine disk image: /Volumes/TMBackup/S500-MBP.sparsebundle
  • Error: (-36) SrcErr:NO Copying /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/S500-MBP/2019-03-06-135928/Macintosh SSD/Users/superspeed500/Documents/Prosjekter/Web/klesskap/node_modules/lodash/join.js to /Volumes/Time Machine-sikkerhetskopier/Backups.backupdb/S500-MBP/2019-03-06-135930.inProgress/29599823-214C-4891-BF32-FB3441E30B94/Macintosh SSD/Users/superspeed500/Documents/Prosjekter/Web/klesskap/node_modules/lodash
These messages are more unusual, I think. The "Input/output error" and "SrcErr" make me wonder if there's a problem with the drive you are backing up (but I'm really just guessing here). Is it an HDD or solid-state storage?

Nothing I see helps explain the backupd process using over 9 GB of memory.

So, to sum up, I don't see anything that explains your symptoms -- sorry. If you see the message "Backup completed successfully" after each backup, at least we can feel like the backups are working...
 

Superspeed500

macrumors regular
Original poster
Jul 25, 2013
188
42
All the messages up to the 6 March backup are messages I've seen on my own Macs. While some of the messages are annoying, I don't believe they are causing a problem. Backups that do the "Deep event scan" do seem more processor-intensive, but I wouldn't think they'd make the machine "super slow" or laggy, even on a 2010 MBP.



These messages are more unusual, I think. The "Input/output error" and "SrcErr" make me wonder if there's a problem with the drive you are backing up (but I'm really just guessing here). Is it an HDD or solid-state storage?

Nothing I see helps explain the backupd process using over 9 GB of memory.

So, to sum up, I don't see anything that explains your symptoms -- sorry. If you see the message "Backup completed successfully" after each backup, at least we can feel like the backups are working...
Thanks. The disk I back up to is 2x10TB Seagate IronWolf hard drives in RAID 1 inside a QNAP NAS. I doubt the disks are to blame, since they both report fine in the NAS control panel.

I do have a theory, though, about the insane memory usage. I do some web development in React, where I install modules using the npm package manager. All the packages get installed in a folder called node_modules. This folder doesn't take up much disk space, but it usually has over 1000 folders inside that get created whenever you start a new project from a template. Trying to back up over 1000 files in one go is probably what causes my issues :( I haven't been able to confirm this theory, though.
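
If the theory is right, I guess I could just exclude those folders from TM entirely. A sketch of what I have in mind (the projects path is only an example, and tmutil is macOS-only, hence the guard):

```shell
# Sketch: mark every node_modules folder under a projects directory as
# excluded from Time Machine. The example path is an assumption --
# point it at wherever your projects actually live.
exclude_node_modules() {
    projects="${1:?usage: exclude_node_modules /path/to/projects}"
    find "$projects" -type d -name node_modules -prune | while read -r dir; do
        if command -v tmutil >/dev/null 2>&1; then
            tmutil addexclusion "$dir"      # macOS only
        else
            echo "would exclude: $dir"      # elsewhere, just show the plan
        fi
    done
}

# e.g. exclude_node_modules "$HOME/Documents/Prosjekter/Web"
```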