
I've got a script on a local machine that is meant to copy files to a remote server overnight. I currently do it via scp with a passwordless key, as that was the only way I could get the script to run unattended.

The script runs on a client's machine, and several people other than me have access to it. The remote machine is a cloud instance belonging to me. Even though nothing bad is expected to happen, it's better to be safe: I want the script to run automatically yet safely, so that no one with access to the local machine can get into my cloud machine.

I don't feel comfortable leaving it this way forever, since it's about as insecure as it gets, but I haven't been able to find a secure way to accomplish the task. I've looked at Rclone, but it seems to leave the machine in the same unprotected state.

What would be the best option to solve this in the most secure way? I've been thinking about creating a dedicated user on the remote and restricting its access to only the folder the files are copied into; however, that folder would still be freely accessible to anyone holding the key.

  • Please define "secure" in your context. Specifically, what threats are you worried about? – roaima Feb 12 '22 at 14:17
  • The script is working on a client's machine. Several people other than me have access to this machine. The remote machine is my cloud instance. Even though nothing is expected to happen, it's still better to be safe. So I just want the script to work automatically and yet safely, so no one with access to the local machine could access my cloud machine – Vitaly Varyvdin Feb 12 '22 at 15:11
  • Can't scp from remote to local because the local machine isn't exposed to the public. – Vitaly Varyvdin Feb 12 '22 at 15:31
  • 1
    web server (nginx/apache) + web auth + webdav – Artem S. Tashkinov Feb 12 '22 at 16:36

2 Answers


Your answer is to use sftp. It's not much of a change from what you're currently doing, as it also runs over an SSH connection.

sftp is a file transfer program, similar to ftp(1), which performs all operations over an encrypted ssh(1) transport. It may also use many features of ssh, such as public key authentication and compression. (from the sftp(1) man page)

First, create a dedicated user for your customer. It should have a root-owned home directory (required by sshd's ChrootDirectory), the customer's public key in .ssh/authorized_keys, and a self-owned directory it can write to:

$ sudo mkdir -p /home/tmpuser/.ssh
$ sudo mkdir -p /home/tmpuser/uploads
$ sudo cp ~/.ssh/id_rsa.pub /home/tmpuser/.ssh/authorized_keys
$ sudo adduser --home /home/tmpuser --disabled-password tmpuser
$ sudo chown tmpuser /home/tmpuser/uploads

It's fine to keep using the same passwordless SSH key. I imagine you're worried your customer will share their private key with someone else. That "someone else" can only see files belonging to this dedicated user, not any other user and not the rest of your machine. It's their risk, not yours. You can mitigate it further by monitoring uploads/ and moving new files out of the chroot on arrival, or deleting the files after processing. Obviously, never trust incoming files (e.g. never execute them).
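A minimal sketch of that sweep, assuming the uploads/ directory from the setup above; the destination /srv/incoming and the script path are invented for illustration:

```shell
#!/bin/sh
# sweep_uploads SRC DST: move completed uploads out of the chroot so
# the client's key can no longer read or modify them after delivery.
sweep_uploads() {
    src=$1; dst=$2
    mkdir -p "$dst"
    # -mmin +1 skips files modified within the last minute, so an
    # upload that is still in progress is left alone.
    find "$src" -type f -mmin +1 -exec mv -- {} "$dst"/ \;
}

if [ "$#" -ge 2 ]; then sweep_uploads "$1" "$2"; fi

# Example root crontab entry, running the sweep every minute:
# * * * * * /usr/local/sbin/sweep-uploads.sh /home/tmpuser/uploads /srv/incoming
```

Because root performs the move, the files end up outside the chroot where the tmpuser key has no reach at all.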

Next, in /etc/ssh/sshd_config, switch the sftp subsystem to internal-sftp and force it for this user:

#Subsystem      sftp    /usr/lib/openssh/sftp-server   <-- comment this line out
Subsystem       sftp    internal-sftp

# At the bottom of the file add this:
Match User tmpuser
        ForceCommand internal-sftp
        X11Forwarding no
        AllowTcpForwarding no
        ChrootDirectory %h

Now restart sshd with sudo systemctl restart sshd, then try it out:

# Confirm normal ssh is impossible
$ ssh tmpuser@localhost
This service allows sftp connections only.
Connection to localhost closed.

# Confirm scp is impossible
$ scp testfile tmpuser@localhost:
This service allows sftp connections only.

# Check that you can "put" a file in uploads/
$ echo "put testfile uploads/" | sftp -b - tmpuser@localhost 
sftp> put testfile uploads/

# Check that you can connect via sftp and can only use sftp commands
$ sftp tmpuser@localhost
Connected to localhost.

# Check that a basic command works
sftp> ls
uploads  

# Check that we are indeed in a chroot
sftp> pwd
Remote working directory: /

# Check that our testfile was actually uploaded
sftp> ls uploads
uploads/testfile   

# Try to get out of the chroot
sftp> cd ..
sftp> ls
uploads  

# Try to run a non-sftp command
sftp> ps
Invalid command.
Stewart
  • Re-worked this answer as the `ChrootDirectory` option wasn't working in my first iteration. `%h` needed to be owned by root. – Stewart Feb 13 '22 at 08:02

Basically you need to insulate the server from the client as much as possible. Since the server (according to your description) acts only as a sink, you are probably looking for a way to make the access write-only, preferably with some input checking (at the very least a size limit to prevent a possible DoS).

If you want to go with SSH, you can use the ForceCommand option and point it at e.g. a filter script that just pipes its input into an appropriate file on the server. You may also be interested in the command="command" option that can be set per key in the authorized_keys file. See the sshd(8) and sshd_config(5) man pages for more details.
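A sketch of such a filter script; the name receive-upload, the directory /srv/incoming, and the 100 MB cap are all invented for illustration:

```shell
#!/bin/sh
# receive_upload DIR: store whatever the client sends on stdin as a
# uniquely named file under DIR, capped in size so a malicious or
# runaway sender cannot fill the disk.
MAX_BYTES=$((100 * 1024 * 1024))

receive_upload() {
    dir=$1
    out=$(mktemp "$dir/upload.XXXXXX") || return 1
    # head -c caps how much data we accept from the client.
    head -c "$MAX_BYTES" > "$out" && printf '%s\n' "$out"
}

if [ "$#" -ge 1 ]; then receive_upload "$1"; fi

# Install as /usr/local/bin/receive-upload and force it for the client's
# key in the upload user's ~/.ssh/authorized_keys:
#
#   command="/usr/local/bin/receive-upload /srv/incoming",no-pty,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backup-key
#
# The client then uploads with:  ssh upload@server < localfile
```

With the key restricted this way, even a stolen key can only push bytes into fresh files in one directory; it cannot list, read, or delete anything.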

An alternative is a simple REST API that only allows uploads: if you choose HTTP, you basically need to handle a single request method, POST (or PUT, depending on your exact setup). Authenticate with a client certificate, preferably one with a short expiration and, if possible, bound to the particular source system.
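Following the nginx + webdav suggestion from the comments, an upload-only endpoint could look roughly like the server block below; the server name, paths, and certificate files are all placeholders:

```nginx
# /etc/nginx/conf.d/upload.conf -- write-only upload endpoint (sketch)
server {
    listen 443 ssl;
    server_name upload.example.com;

    ssl_certificate     /etc/nginx/tls/server.crt;
    ssl_certificate_key /etc/nginx/tls/server.key;

    # Require a client certificate issued by our own CA.
    ssl_client_certificate /etc/nginx/tls/client-ca.crt;
    ssl_verify_client on;

    # Reject bodies over 100 MB to limit DoS.
    client_max_body_size 100m;

    location /incoming/ {
        root /srv/webdav;           # files land in /srv/webdav/incoming/
        dav_methods PUT;            # requires ngx_http_dav_module
        create_full_put_path off;

        # Allow only PUT; everything else (GET, DELETE, ...) is refused,
        # making the endpoint effectively write-only.
        limit_except PUT { deny all; }
    }
}
```

The nightly script on the client then reduces to a single call such as curl --cert client.pem -T file https://upload.example.com/incoming/file.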

peterph