
System Background: I am running CentOS 7 in a VM (in VirtualBox) on a host computer running Windows 10. My /dev/sda is on an SSD and then I have three 1TB HDDs /dev/sdb, /dev/sdc, and /dev/sdd configured into a RAID 5 drive called /dev/md5. The /dev/md5 is formatted as ext4 and is mounted to /raid5.

I wanted to bind mount both /home and /var to sub-directories in my RAID 5, /dev/md5.

The following steps worked just fine for /home:

mkdir /raid5/home
rsync -av /home/* /raid5/home
mount --bind /raid5/home /home

nano /etc/fstab
...
/raid5/home /home none bind 0 0

I then restarted CentOS and it booted with no problem. Checking df -aTh, both /raid5/home and /home show as mounted on /dev/md5.

I followed the exact same process to bind mount /var to /raid5/var, and upon reboot I can't even get to the login screen. The exact same commands were used; just substitute /var everywhere you see /home.
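For reference, reconstructing that substitution (these commands need root and assume the /raid5 mount described above), the /var attempt would have been:

```sh
mkdir /raid5/var
rsync -av /var/* /raid5/var
mount --bind /raid5/var /var

# appended to /etc/fstab:
/raid5/var /var none bind 0 0
```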

FYI, I started using CentOS/Linux just last weekend, so I have only one week of experience so far. I'm familiar with a lot of the terminal commands, partitioning, formatting and mounting drives, installing software, etc. I'm less familiar with file/directory permissions (I have a gut feeling that when I bind /var to /raid5/var, some important software can no longer access directories it needs).

sebasth
2 Answers


I assume you used the same steps for /var in fstab as for /home. The issue is likely caused by incorrect SELinux file contexts, since rsync -a does not copy extended attributes (xattrs), which is where SELinux labels are stored.
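If you redo the copy, rsync can be told to preserve ACLs and extended attributes explicitly; -A and -X are standard rsync flags, though worth confirming against your rsync version:

```sh
# -a implies -rlptgoD but NOT -A (ACLs) or -X (extended attributes),
# so the security.selinux xattr is lost with plain -a
rsync -avAX /var/ /raid5/var
```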

You should be able to boot with SELinux in permissive mode by temporarily adding enforcing=0 to the kernel boot options (press e at the GRUB menu and append it to the kernel command line).

If you can now boot, the issue is incorrect SELinux labels. To fix them, run restorecon -R /var. The system should then boot normally on the next reboot.

You should also add a file labeling rule so that automatic relabeling is not applied to /raid5:

semanage fcontext -a -t "<<none>>" "/raid5(/.*)?"
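On CentOS 7 the semanage tool comes from the policycoreutils-python package; a sketch of installing it and sanity-checking that the rule was recorded (the grep is just a verification step, not required):

```sh
yum install -y policycoreutils-python   # provides semanage on CentOS 7
semanage fcontext -a -t "<<none>>" "/raid5(/.*)?"
semanage fcontext -l | grep '/raid5'    # the new rule should be listed
```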
sebasth

It has been a while, but this is ringing a bell... are any of these on logical volumes? I think you may want to read through this answer...

CentOS moving var

As per those instructions: boot the rescue media, then escape to a shell.

Scan for volume groups: lvm vgscan -v

Activate all volume groups: lvm vgchange -a y

List logical volumes: lvm lvs --all

With this information, and the volumes activated, you should be able to mount the volumes: mount /dev/volumegroup/logicalvolume /mountpoint
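Before going down the rescue-media route, you can check whether /var is on a logical volume at all; lsblk (from util-linux) prints the block device tree with one row per device:

```sh
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT   # rows with TYPE "lvm" are logical volumes
```

In this question's setup you would expect to see the three HDDs as raid5 members under /dev/md5 rather than any lvm entries.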

number9