
tl;dr How can processes that are writing data to disk be temporarily stopped, so that they cannot disrupt fsfreeze/sync?

long story

My question is about a process that writes data to be persisted on disk (the storage device) in a "runaway" fashion.

This, for instance, is a process that I would consider "runaway":

yes "$(base64 /dev/urandom | head -n 20)" > /mnt/fs1/file

since in all likelihood it can generate data faster than it can be written to disk. (It could be even worse; see below.)

When doing a sync, an fsfreeze, or a btrfs subvolume snapshot, such a "runaway" process becomes an obstacle, or at the least causes considerable delay. Indeed, I have experimented with fsfreeze on a btrfs filesystem and it "bit its own tail": the operation hung.

As an answer, I seek advice on how to prevent any such processes from running while I persist the currently buffered filesystem data to disk.

At first, such a "runaway writer" might seem unlikely, since it would eventually fill up the available storage space (the free space in the filesystem). This is not necessarily the case, however, as a writer could continuously overwrite the same file content in this fashion:

RANDSTRING="$(base64 /dev/urandom | head -n 20 | tr -d '\n')"
while true
do
  yes "$RANDSTRING" | head -c 100M > /mnt/fs1/file
done

The above bash script could also be rewritten as a small C program, so as to avoid the repeated open system calls, by simply seeking back to the start of /mnt/fs1/file. The bottom line is that new data is constantly produced at a higher rate than it can be written out to the storage device, which should prevent sync and fsfreeze from ever successfully finishing.

humanityANDpeace
  • Couldn't you just `kill -STOP` them? – UncleBilly Feb 12 '21 at 09:48
  • @UncleBilly I am not sure. I guess this would require knowing the *runaway* processes, and fail for the more generic case, right? Or do you imply a way to `kill -STOP [all processes]`? (Not sure if this is actually possible.) – humanityANDpeace Feb 12 '21 at 10:23
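A minimal sketch of the `kill -STOP` idea from the comments, using a throw-away background writer to a temporary file as a stand-in for the runaway process (the writer and the path here are illustrative, not the actual workload or /mnt/fs1):

```shell
#!/bin/bash
# Start a stand-in "runaway" writer in the background.
tmpfile=$(mktemp)
( while true; do echo "data" > "$tmpfile"; done ) &
writer=$!

# Freeze the writer before persisting buffered data.
kill -STOP "$writer"
sleep 0.2                              # give the stop a moment to take effect
state=$(ps -o stat= -p "$writer")
echo "writer state: $state"            # 'T' = stopped

# With the writer frozen, sync is no longer racing newly dirtied pages.
sync

# Resume (and, for this demo, clean up) the writer afterwards.
kill -CONT "$writer"
kill "$writer" 2>/dev/null
wait "$writer" 2>/dev/null || true
rm -f "$tmpfile"
```

For the generic case where the writers are not known in advance, `fuser` (from psmisc, on Linux) can send a signal to every process accessing a given mount, e.g. `fuser -k -STOP -m /mnt/fs1` — though that would also stop processes you may depend on, so use it with care.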
