
When I run `podman run`, I get a particularly weird error:

❯ podman run -ti --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
✔ docker.io/rancher/rancher:latest
Trying to pull docker.io/rancher/rancher:latest...
Getting image source signatures
[... blob copying...]
Writing manifest to image destination
Storing signatures
  Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 630384594:600260513 for /usr/bin/etcdctl): Check /etc/subuid and /etc/subgid: lchown /usr/bin/etcdctl: invalid argument
Error: Error committing the finished image: error adding layer with blob "sha256:b4b03dbaa949daab471f94bcfd68cbe21c1147e8ec2acfe3f46f1520db48baeb": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 630384594:600260513 for /usr/bin/etcdctl): Check /etc/subuid and /etc/subgid: lchown /usr/bin/etcdctl: invalid argument

What does "potentially insufficient UIDs or GIDs available in user namespace" mean and how can I remedy this problem?

Evan Carroll
  • I tried as root: `echo 80 > /proc/sys/net/ipv4/ip_unprivileged_port_start` and then as user: `subuidSize=$(($(podman info --format "{{ range .Host.IDMappings.UIDMap }}+{{.Size }}{{end }}")-1)) ; subgidSize=$(( $(podman info --format "{{ range .Host.IDMappings.GIDMap }}+{{.Size}}{{end}}" )-1)); uid=630384594 ; gid=600260513 ; podman --log-level=debug run --uidmap 0:0:5001 --uidmap $uid:5001:$(($subuidSize-5000)) --gidmap 0:0:5001 --gidmap $gid:5001:$(($subgidSize-5000)) --name ranchertest -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged docker.io/rancher/rancher:latest` – Erik Sjölund Feb 04 '22 at 20:28
  • It was just an experiment with __--uidmap__ and __--gidmap__. `podman logs ranchertest` showed some log output. At the end of the log output: `2022/02/04 20:18:15 [INFO] Waiting for k3s to start 2022/02/04 20:18:16 [FATAL] k3s exited with: exit status`. It looks like the container started but failed very quickly. The variables `subuidSize` and `subgidSize` have the value `65536` – Erik Sjölund Feb 04 '22 at 20:31

2 Answers

The already accepted answer sent me looking for a global setting that would produce the same result for every podman invocation. That is important in my case, since podman is invoked by another script I don't own.

There is a corresponding option (ignore_chown_errors) in /etc/containers/storage.conf, which on my system was commented out, so I added the following line to the file:

ignore_chown_errors = "true"

With that change, all podman invocations on the system started working correctly, without adding the flag --storage-opt ignore_chown_errors=true to each invocation.
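For context, the option sits in the [storage.options] table of /etc/containers/storage.conf (section name per the containers-storage.conf layout; your file may already contain a commented-out copy of the line to uncomment instead):

```toml
[storage.options]
# In rootless mode, ignore chown failures during image pulls;
# files that cannot be chowned end up owned by a single user.
ignore_chown_errors = "true"
```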

PS: I did not have a problem with filesystems, so I did not try changing the mount_program setting in that configuration file, but I presume it would have the same global effect across all invocations.
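For reference, the FUSE fallback from the other answer can also be made global in the same file. A sketch, assuming fuse-overlayfs is installed at /usr/bin/fuse-overlayfs; depending on your containers-storage version the key may live under [storage.options] rather than [storage.options.overlay]:

```toml
[storage.options.overlay]
# Use fuse-overlayfs instead of the kernel overlay driver
mount_program = "/usr/bin/fuse-overlayfs"
```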

Stephen Kitt
Denilson
  • Interesting. Since then, were there any negative consequences? The articles I have found describing the issue are quite confusing and imply that there are constraints imposed when using that override option. Wondering if your team continues to use this approach and if it resulted in any other noticeable limitations. – shawn1874 May 25 '23 at 17:21
In order to get around this I had to run with --storage-opt ignore_chown_errors=true. This ignores chown errors and forces your container to support only one user. You can read about this in "Why can’t rootless Podman pull my image?". Note that this is an option to podman itself, not to podman run, so using it looks like this:

podman --storage-opt ignore_chown_errors=true run [....]

In my case, because I did not have the kernel overlayfs driver, I also needed to use the FUSE version (installed with sudo apt install fuse-overlayfs):

podman --storage-opt mount_program=/usr/bin/fuse-overlayfs --storage-opt ignore_chown_errors=true run [....]
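The numbers in the error message show why the flag is needed: a rootless user namespace typically maps only 65536 subordinate UIDs (see your entry in /etc/subuid, and the comment thread above), while this image contains a file owned by UID 630384594. A quick sanity check (65536 is the common default allocation, an assumption here, not something the image dictates):

```shell
# The image wants this owner for /usr/bin/etcdctl ...
requested_uid=630384594
# ... but a typical rootless user namespace maps only this many IDs.
# Check your own allocation with: grep "^$USER:" /etc/subuid
available=65536

if [ "$requested_uid" -ge "$available" ]; then
    echo "UID $requested_uid cannot be mapped; lchown fails"
fi
```

With ignore_chown_errors enabled, the unmappable ownership is simply dropped instead of aborting the pull.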
Evan Carroll