Recently, I've run into this hand-rolled "keepalive" watchdog implementation for a custom HTTP server:
|
| keepalive.sh
| init.sh
|_ main.py
There are three files. The first is keepalive.sh, which is run as root from rc.local when the system starts up.
# keepalive.sh
while true
do
    if curl -s "localhost:8080/alive.txt" | grep -q "ALIVE"; then
        echo "UP"
    else
        echo "Restarting..."
        pkill -SIGKILL -u www-data
    fi
    sleep 30
done
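To check my understanding of the check itself, I simulated a dead server in isolation (my own sketch, not part of the original scripts), since in that case curl -s prints nothing:

```shell
# Simulating the health check with a dead server (assumption: when the
# server is down, `curl -s` produces no output). grep then reads empty
# input, `grep -q` exits non-zero, and the else branch fires.
if printf '' | grep -q "ALIVE"; then
    echo "UP"
else
    echo "Restarting..."   # this branch runs: empty input never matches
fi
```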
The second is init.sh, which is run as the www-data user.
# init.sh
set -m
/usr/bin/python3 /var/www/main.py &
while true; do fg; sleep 1; done
The third is the custom server itself, main.py, which is what needs to be kept alive.
from server import Server
HOST = '0.0.0.0'
PORT = 8080
if __name__ == "__main__":
    Server(HOST, PORT).listen()
I don't quite understand how init.sh works. The -m flag enables job control so that fg can be used from a script file. What is the purpose of sending main.py to the background and then calling fg in an infinite loop? I understand that & is needed here to reach the loop after the server has started, but why exactly is fg used (and not just true, for example)?
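To see what fg does here in isolation, I tried this minimal bash snippet (my own experiment, not from the original files): set -m enables job control, & backgrounds the job, and fg brings it back to the foreground, blocking until the job exits.

```shell
# Minimal demo of the init.sh pattern (assuming bash).
set -m               # enable job control in a non-interactive shell
sleep 2 &            # stands in for main.py here
fg                   # prints "sleep 2" and blocks until the job exits
echo "job finished"  # only reached after the foregrounded job is done
```

So fg makes the script wait on the server instead of spinning through the loop while the server is still running.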
The other thing I don't understand is why init.sh restarts when it is killed with -SIGKILL. The Python script is launched in a separate process group, with init.sh's PID as its PPID. main.py gets killed when its parent (init.sh) gets killed; I'm fine with that. My question is why init.sh keeps restarting main.py (and itself) after being killed by pkill.
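Here is how I checked the process-group claim (my own experiment, not part of the original setup): with job control on, each background job runs in its own process group, so its PGID differs from the script's PGID.

```shell
# Demo (assumption, not the original scripts): under `set -m`, a
# background job gets its own process group, distinct from the script's.
set -m
sleep 30 &
child=$!
# Show PID and PGID for the script and the child; the PGIDs differ:
ps -o pid=,pgid= -p "$$" -p "$child"
kill "$child"        # clean up the demo job
```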
Thanks!