I want to do primitive 'virtualization' of my RHEL environment. This comes from the fact that I have to run a couple of binaries with hard-coded paths pointing to, e.g., the /etc/conf directory. There are a couple of different binaries that need different, conflicting configurations placed in that same directory.

So I want to run application A that reads its configuration from /etc/conf, and I want to run application B that reads its configuration from /mnt/etc/conf but "believes" it is reading from /etc/conf — e.g. by providing a script that "changes" the current / location to /mnt.

I've heard that this kind of operation can be done with fakechroot, but that it is not available any more. Is that correct? Is there a better way to provide this functionality?

TheMP
  • And neither of these binaries offers any way to specify an alternate configuration file location? Most Linux utilities allow you some latitude to customize things when they are called... What do the man pages for these binaries say? – Thomas N Feb 20 '17 at 15:48
  • Well that's a problem, they are not standard linux binaries but some in-house thing I cannot modify and I have to somehow make it work. – TheMP Feb 20 '17 at 15:53
  • Wow, that's pretty inflexible... How are those hard-coded configurations set? Would it be possible to alter their system environment (e.g., using the 'env' command, or other environment variables)? – Thomas N Feb 20 '17 at 16:09
  • You might look into running this in some kind of container where you expose the directories it needs to modify but otherwise all files are local to that container/application. This would also give you more flexibility in terms of resource control later on. – Bratchley Feb 20 '17 at 16:11
  • @Bratchley What is the simplest container that will allow me to just reuse all the binaries, env variables and files except this one `/etc/conf` thing? – TheMP Feb 20 '17 at 16:13
  • How simple it is depends on what the application actually does. It sounds like you can just create a base image with everything in it and just run different instances of that single image in different containers that have `/etc/conf` mapped to a different directory on the container host. – Bratchley Feb 20 '17 at 16:20
  • For instance: `docker run -it --rm -v /srv/configs/util1:/etc/conf baseImage /path/to/util1` – Bratchley Feb 20 '17 at 16:22
  • I posted a brief walkthrough on the idea since the comments section is limited. You could also try to see if something would work with `mount --bind`+`chroot` (I really don't know) but that's likely more work than doing this in a container. Kind of depends on how much work setting up the binaries are in the first place. – Bratchley Feb 20 '17 at 16:34
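The `mount --bind`+`chroot` idea mentioned in the last comment could be sketched roughly like this (the paths are illustrative, it needs root, and it is untested against your binaries — a bind mount placed over /mnt/rootB/etc/conf shadows that path only inside the bind-mounted tree, not the real /etc/conf):

```shell
# Build a view of the real root under /mnt/rootB (illustrative path).
mkdir -p /mnt/rootB
mount --bind / /mnt/rootB
# Shadow /etc/conf inside that tree with application B's own config dir.
mount --bind /mnt/etc/conf /mnt/rootB/etc/conf
# Run the binary inside the chroot; it now sees B's config at /etc/conf.
chroot /mnt/rootB /path/to/applicationB
```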

1 Answer


You could try using a Docker container. The simplest way to do that (so that you don't have to learn Dockerfile syntax) is to install Docker, then run `docker run -it centos:<version>` (where `<version>` matches the RHEL release these utilities are written for). From inside the container, set up all the binaries and executables the way they need to be, then exit the shell/container and use `docker ps` to find the container ID.

For example:

[root@website ~]# docker ps -a
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS                  PORTS                   NAMES
42f454cc1c71        centos:latest               "/bin/bash"              About an hour ago   Up About an hour                                sick_davinci

Then use `docker commit` to save the container as an image:

docker commit 42f454cc1c71 baseImage

Then to run firstUtil (or whatever it's called):

docker run -it --rm -v /srv/configs/firstUtil:/etc/conf baseImage /path/to/firstUtil

Then to run secondUtil:

docker run -it --rm -v /srv/configs/secondUtil:/etc/conf baseImage /path/to/secondUtil

etc, etc

Each of the above `docker run` commands runs the given command in a container based on the image you set up, and `--rm` causes the container to be torn down once that command exits. The `--rm` is useful so that one run of the utility can't influence the next (unless the utility modified something inside /etc/conf, which is bind-mounted from the host), since each time the command runs it's the first time anything has run in that container.

You can put these docker run commands in a script or shell alias if it makes it easier to type out.

Be aware that, by default, Docker will also try to modify the firewall rules and routing table on the host.
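If the utilities themselves don't need network access, you can at least keep each container off the Docker bridge with `--network none` (note the daemon still sets up its iptables chains at startup regardless):

```shell
# Same invocation as above, but with no container networking at all.
docker run -it --rm --network none \
    -v /srv/configs/firstUtil:/etc/conf baseImage /path/to/firstUtil
```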

Bratchley
  • Thanks, I've upvoted this answer since it allows me to achieve the goal, but is there a simpler way, without having to set up a second OS from scratch with Docker? – TheMP Feb 21 '17 at 14:06