Perhaps your data has a large number of hard links, and restic doesn't bother tracking them, since deduplication will eliminate the extra data anyway?
Just guessing at possible reasons; I don't know how hard links are actually handled. A quick check is sketched below.
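To test this guess, something like the following could estimate how many extra bytes a scanner would count if it summed file sizes once per path instead of once per inode (a rough sketch, assuming GNU find; the /var/www path is taken from the report below):

# For every regular file with more than one hard link, print
# "inode link-count size"; each link prints an identical line, so
# sort -u leaves one line per inode. The awk sum is the bytes counted
# extra when sizes are summed per path rather than per inode.
find /var/www -type f -links +1 -printf '%i %n %s\n' \
  | sort -u \
  | awk '{ extra += $3 * ($2 - 1) } END { printf "extra bytes if counted per path: %d\n", extra }'

One caveat: %n counts links anywhere on the filesystem, so this overestimates if some of the links live outside /var/www.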
When backing up a web server with many HTML root directories, I get a larger scan result than expected.
Output of restic version:
restic 0.2.0 (v0.2.0-181-g24385ff-dirty)
compiled at 2016-09-07 14:22:18 with go1.7
Expected behavior
restic scans through the directories and reports a combined file size close to what df shows.
Output of df:
~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          443G  324G   97G  78% /
/dev/root       443G  324G   97G  78% /
devtmpfs        4.9G     0  4.9G   0% /dev
tmpfs          1001M  164K 1001M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /run/shm
Number of webspaces:
~# ls -l /var/www/ | wc -l
2308
Actual behavior
The restic scan finishes at roughly three times the used space on / (about 3 × 324G, so close to 1T) after about 50 minutes, and then begins backing up data.
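If hard links are what inflates the total, GNU du can show it directly; a sketch, assuming GNU coreutils, where -l (--count-links) is what makes the difference:

du -sh /var/www     # each hard-linked inode counted once (should be close to df)
du -shl /var/www    # every link counted separately (should be close to the scan total)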
Steps to reproduce the behavior
Start a backup of a "larger" web server:
restic -r sftp://backup@[....]:2244/restic backup /var/www