
I have a binary that creates some files in /tmp/*some folder* and runs them. This same binary deletes these files right after running them. Is there any way to intercept these files?

I can't make the folder read-only, because the binary needs write permissions. I just need a way to either copy the files when they are executed or stop the original binary from deleting them.

dragostis
  • I don't believe you can do this with standard Unix permissions. You may want to check the man pages for `setfacl` and `getfacl` to see if anything there can help you, but I seriously doubt it. Your saving grace might be setting up some sort of tripwire to watch the contents of this directory and run a `cp` upon detecting new files. – MelBurslan Feb 26 '16 at 14:49
  • I don't think it will work with `setfacl`. I was thinking of maybe coding something for this. – dragostis Feb 26 '16 at 14:52
  • I take it you do not have the source for the binary, and so cannot modify it? – Faheem Mitha Feb 26 '16 at 14:59

3 Answers


`chattr +a /tmp/*some folder*` will set the folder to be append-only. Files can be created and written to, but not deleted. Use `chattr -a /tmp/*some folder*` when you're done.

doneal24

You can use the inotifywait command from inotify-tools in a script to create hard links of files created in /tmp/some_folder. For example, hard link all created files from /tmp/some_folder to /tmp/some_folder_bak:

#!/bin/sh

ORIG_DIR=/tmp/some_folder
CLONE_DIR=/tmp/some_folder_bak

mkdir -p "$CLONE_DIR"

inotifywait -mr --format='%w%f' -e create "$ORIG_DIR" | while IFS= read -r file; do
  echo "$file"
  DIR=$(dirname "$file")
  mkdir -p "${CLONE_DIR}/${DIR#$ORIG_DIR/}"
  cp -rl "$file" "${CLONE_DIR}/${file#$ORIG_DIR/}"
done

Since they are hard links, they should be updated when the program modifies them but not deleted when the program removes them. You can delete the hard linked clones normally.

Note that this approach is not atomic, so you are relying on the script to create the hard link before the program deletes the newly created file.
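The hard-link behavior the script relies on can be seen in isolation (a minimal sketch; `/tmp/hl_demo` is just an example path, not from the answer):

```shell
# Two directory entries pointing at the same inode: unlinking one name
# leaves the data reachable through the other.
mkdir -p /tmp/hl_demo
echo "payload" > /tmp/hl_demo/original
ln /tmp/hl_demo/original /tmp/hl_demo/clone   # link count is now 2
rm /tmp/hl_demo/original                      # link count drops to 1
cat /tmp/hl_demo/clone                        # prints "payload"
```

Appending to either name updates the shared inode, which is why writes by the program show up in the clone as well.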

If you want to clone all changes to /tmp, you can use a more distributed version of the script:

#!/bin/sh

TMP_DIR=/tmp
CLONE_DIR=/tmp/clone
mkdir -p "$CLONE_DIR"

wait_dir() {
  inotifywait -mr --format='%w%f' -e create "$1" 2>/dev/null | while IFS= read -r file; do
    echo "$file"
    DIR=$(dirname "$file")
    mkdir -p "${CLONE_DIR}/${DIR#$TMP_DIR/}"
    cp -rl "$file" "${CLONE_DIR}/${file#$TMP_DIR/}"
  done
}
}

trap "trap - TERM && kill -- -$$" INT TERM EXIT

inotifywait -m --format='%w%f' -e create "$TMP_DIR" | while IFS= read -r file; do
  if ! [ -d "$file" ]; then
    continue
  fi

  echo "setting up wait for $file"
  wait_dir "$file" &
done
Matt Vollrath
  • Does this work if someone writes into the file? If I do `echo "something" >> some_file` this won't always update it. – dragostis Feb 26 '16 at 15:11
  • Yes, the contents of the hard linked copy will be updated. See https://en.wikipedia.org/wiki/Hard_link – Matt Vollrath Feb 26 '16 at 15:14
  • Can this be extended to work with a directory pattern? What I need is actually `some_folder-*`. – dragostis Feb 26 '16 at 15:17
  • Why not just run it on `/tmp`? It won't use any additional disk space because hard links are only references. Just don't do anything recursive like clone `/tmp` to `/tmp/foo` – Matt Vollrath Feb 26 '16 at 15:18
  • Actually that won't work if you try to hard link to a different mount point. Let me see what can be done. – Matt Vollrath Feb 26 '16 at 15:21
  • It does work. :D – dragostis Feb 26 '16 at 15:25
  • @dragostis I've added a distributed version that adds a watcher for each subdirectory of `/tmp`. – Matt Vollrath Feb 26 '16 at 15:59
  • @MattVollrath You say hard link, but your `cp` command makes copies, not hard links. Did you mean `-l` instead of `-L`? At least this would be the correct option for GNU coreutils cp. – Dubu Feb 26 '16 at 16:45

If the programs executed from /tmp are still running, you can usually still retrieve the original binary even if it is "deleted" from the filesystem, because the inode still exists with the data; the removal is just unlinking the name from the directory.

In Linux, you can access the inode's contents via the /proc/PID/exe link. Tools like `ls` will show you the original path, mark the link as broken (by color), and append "(deleted)" to the name in the listing. However, you can still retrieve the contents by reading the file.

An example showing this concept (using sleep as an illustrative tool):

$ cp /bin/sleep /tmp/otherprog
$ /tmp/otherprog 300 &
[1] 3572
$ rm /tmp/otherprog
$ ls -l /proc/3572/exe
lrwxrwxrwx 1 john john 0 Feb 27 08:54 /proc/3572/exe -> /tmp/otherprog (deleted)
$ cp /proc/3572/exe /tmp/saved
$ diff /tmp/saved /bin/sleep
$ echo $?
0

I created a "new" program by copying the contents of the sleep program to a new program called "otherprog" and ran it such that it would keep running for a while. Then I deleted the program from /tmp. Using the PID I got from the shell (you can find the PIDs of the programs you care about via `ps`), I looked at the exe link in /proc, copied the contents of the file (even though the target file name is gone), and checked that the contents match the original.

This of course won't work if the programs from /tmp are short-lived, because once they exit, the link count of the inode will drop to zero and the data will actually be freed from disk.

It does avoid racing to copy the file before it is unlinked from the /tmp directory.
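If you want to automate the recovery, a sketch along these lines scans /proc for processes whose executable lives under /tmp and copies each one out through its exe link. This is my own elaboration of the answer's technique; the `SAVE_DIR` path and the `recovered-PID` naming are illustrative choices, not from the answer:

```shell
#!/bin/sh
# Scan /proc for processes executing a binary from /tmp and copy the
# inode contents out via the /proc/PID/exe link, deleted or not.
SAVE_DIR=/tmp/saved_binaries
mkdir -p "$SAVE_DIR"

for exe in /proc/[0-9]*/exe; do
  # readlink fails for processes we lack permission to inspect
  target=$(readlink "$exe" 2>/dev/null) || continue
  case $target in
    /tmp/*)
      # a deleted target reads "/tmp/... (deleted)", which still matches
      pid=${exe#/proc/}; pid=${pid%/exe}
      cp "$exe" "$SAVE_DIR/recovered-$pid" 2>/dev/null
      ;;
  esac
done
```

Run it while the programs from /tmp are still alive; anything that has already exited is gone, as noted above.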

John O'M.