
There is a problem with undesirable caching of a file by davfs2. A large backup .tar.gz file (4052 MiB) is created on another partition and uploaded. During the upload, davfs2 creates a copy of it under /var/cache/davfs2/, which needlessly consumes disk space on the main partition.

Cache-related options were disabled in /etc/davfs2/davfs2.conf; davfs2 accepted them, but they had no effect:

cache_size 0
delay_upload 0
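For context on why these settings have no effect: davfs2 always stages open files in its on-disk cache, and `cache_size` is only a soft limit that can be exceeded while a file is open (which is what the syslog line below reports). A hedged workaround, rather than a true fix, is to relocate the cache with the `cache_dir` option so the staging copy lands somewhere with room to spare; the path here is only an example:

```
# /etc/davfs2/davfs2.conf (sketch; the path is a placeholder)
# davfs2 still stages the file, but the copy no longer
# competes for space on the main partition.
cache_dir /mnt/scratch/davfs2-cache
```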

Here's a relevant piece of syslog:

Mar 27 03:00:01 ubuntu mount.davfs: davfs2 1.4.7
Mar 27 03:00:01 ubuntu mount.davfs: /sbin/mount.davfs <...> -o rw
<...>
Mar 27 03:12:16 ubuntu mount.davfs: open files exceed max cache size by 4052 MiBytes

How can caching behaviour be disabled entirely, so that no disk space is wasted on the davfs2 cache?

Estus Flask
  • I don't think davfs2 can work without the cache. You might try a simpler program like `cadaver`, which offers an FTP-like command set. – meuh Mar 27 '17 at 15:31
  • @meuh Thanks, I will keep it in mind. I'm about to update the script to stream .tar.gz to the server, so I guess that mountable WebDAV is my best bet. – Estus Flask Mar 27 '17 at 15:59

1 Answer


I found mounted davfs2 shares to be unusable with my storage provider when dealing with many or large files. So I use simple curl wrapper commands to stream files to the remote server instead. No temp files needed!

echo "Creating folder: $(curl -sK /etc/davfs2/curl -X MKCOL $backup_url/$tag)"
echo "Uploading zipped files: $($tarcmd | curl -T- -sK /etc/davfs2/curl $backup_url/$tag/$today.tar.xz)"
echo "Uploading zipped SQL dump: $(mysqldump db_$tag | xz | curl -T- -sK /etc/davfs2/curl $backup_url/$tag/$today.sql.xz)"

My `/etc/davfs2/curl` contains `--user <USER>:<PASS>`
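The three one-liners can be combined into a small script. This is only a sketch under the same assumptions as above: `BACKUP_URL` and `TAG` stand in for `$backup_url` and `$tag`, `/var/www` is a placeholder source directory, and `/etc/davfs2/curl` holds the credentials. The command strings are built by helper functions so they can be printed in a dry run before anything touches the network:

```shell
#!/bin/sh
# Hedged sketch of a streaming WebDAV backup based on the curl
# one-liners above. BACKUP_URL and TAG are hypothetical placeholders.
set -eu

BACKUP_URL="${BACKUP_URL:-https://dav.example.com/backups}"
TAG="${TAG:-mysite}"
TODAY="$(date +%F)"

# Build the curl invocation that creates the remote collection
# (MKCOL is WebDAV's "mkdir").
mkcol_cmd() {
    echo "curl -sK /etc/davfs2/curl -X MKCOL $BACKUP_URL/$TAG"
}

# Build the curl invocation that uploads stdin (-T-) to the given
# remote filename, so data is streamed and never hits a local cache.
upload_cmd() {
    echo "curl -T- -sK /etc/davfs2/curl $BACKUP_URL/$TAG/$1"
}

if [ "${DRY_RUN:-1}" = "1" ]; then
    # Default: just print what would run.
    mkcol_cmd
    upload_cmd "$TODAY.tar.xz"
else
    # DRY_RUN=0: actually create the folder and stream the archive.
    $(mkcol_cmd)
    tar -cJf - /var/www | $(upload_cmd "$TODAY.tar.xz")
fi
```

With `DRY_RUN=0`, the archive is piped straight from `tar` into `curl`, mirroring the `$tarcmd | curl -T- ...` pattern above; no local copy of the upload is ever written.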

Willem
    Nowadays (2021), for a more programmatic sync alternative, `rclone` might come in handy: https://rclone.org/webdav/ – sphakka Dec 06 '21 at 16:13