I'd like to find out which parts of a frequent backup (with only a few changes between runs) take the longest, so that I can reduce both the time a backup needs and the I/O load it causes.
I'm using backintime (BiT) for the backups on my Debian 10/KDE machine.
I think the options for finding this out are:
- somehow examining the running sync process, for example by running `sudo lsof -c rsync | grep "backup/"` to show which files are currently being backed up. However, this command on its own isn't very useful.
- analyzing rsync logs and/or
- changing the rsync parameters (BiT has the option "paste additional options to rsync") and/or
- somehow modifying the rsync and/or BiT software to output such information (preferably the comparative duration of subprocesses, or logs that include the relevant information)
- I have created an issue with BiT here; it doesn't seem to be possible with BiT as of right now.
- and/or maybe something else
- One indirect option would be to manually and separately check which of the included directories contain the most files and which files are the largest. However, file count and size might not be the only things that make a backup slow - e.g. I have checked the BiT option "Use checksum to detect changes", so file contents get read even when nothing changed.
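The indirect check above can be done with plain coreutils. This is only a sketch; `BACKUP_SRC` is an assumption and should point at whatever source directory BiT actually backs up:

```shell
#!/bin/sh
# Assumption: BACKUP_SRC is the directory BiT backs up; adjust it.
BACKUP_SRC="$HOME"

# Data volume per immediate subdirectory. With "Use checksum to
# detect changes" enabled, every byte gets read, so size is a
# reasonable proxy for time spent:
du -x --max-depth=1 "$BACKUP_SRC" 2>/dev/null | sort -rn | head

# File count per immediate subdirectory. Many small files cost
# time (stat/checksum per file) even when little data changes:
for d in "$BACKUP_SRC"/*/; do
  printf '%s\t%s\n' "$(find "$d" -xdev -type f 2>/dev/null | wc -l)" "$d"
done | sort -rn | head
```

The two listings together point at candidate directories to exclude or to back up less often, but they remain an indirect measurement, not a per-directory timing of the actual rsync run.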
How to speed things up would be a separate question. For example, it might be possible to "hash directories" to detect whether there was any change in them (modification/addition/removal) since the last backup, instead of checking every file in directories that contain many files, or to change the metainformation of files. But first I'd like to find out how to find out what is making the backups take so long.
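The "hash directories" idea can be sketched with plain coreutils. Everything here is an assumption for illustration (the function name, the use of GNU find's `-printf`, and `md5sum` as the hash); neither BiT nor rsync works this way today:

```shell
#!/bin/sh
# Sketch: fingerprint a directory from file *metadata* only
# (relative path, size, mtime), so "did anything change here?" can
# be answered without reading file contents. Assumes GNU find,
# which Debian provides.
dir_fingerprint() {
  find "$1" -type f -printf '%P %s %T@\n' | sort | md5sum | cut -d' ' -f1
}

# Demonstration on a throwaway directory:
d=$(mktemp -d)
echo hello > "$d/file.txt"
before=$(dir_fingerprint "$d")
echo more >> "$d/file.txt"     # size (and mtime) change
after=$(dir_fingerprint "$d")
[ "$before" != "$after" ] && echo "directory changed"
rm -r "$d"
```

Note the trade-off this sketch makes: it would miss content changes that leave path, size, and mtime identical, which is exactly the case the "Use checksum to detect changes" option exists to catch.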