
I have an HP N40L MicroServer with two identical drives, which I mirrored using the machine's hardware RAID. I then installed Mint on the system about a year ago.

This ran perfectly, taking updates and so on, until I upgraded to Mint 17.

I thought everything was fine, but I've since noticed that Mint is booting from only one of the drives and then, for some reason, mounting the contents of the other drive.

That is, it boots from sdb1, but df shows sda1 mounted at /. I'm sure df used to show a /dev/mapper/pdc_bejigbccdb1 device, which was the RAID array. As a result, any updates to GRUB go to sda1, but the machine boots sdb1 and then mounts sda1 as its filesystem.

N40L marty # df
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda1      233159608 113675036 107617644  52% /
none                   4         0         4   0% /sys/fs/cgroup
/dev             2943932        12   2943920   1% /media/sda1/dev
tmpfs             597588      1232    596356   1% /run
none                5120         0      5120   0% /run/lock
none             2987920         0   2987920   0% /run/shm
none              102400         4    102396   1% /run/user

My /etc/fstab contains:

N40L marty # cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
/dev/mapper/pdc_bejigbccdb1 /               ext4    errors=remount-ro 0       1
/dev/mapper/pdc_bejigbccdb5 none            swap    sw              0       0

If I do ls -l /dev/mapper I get:

N40L marty # ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Jul 24 17:03 control

How do I get my RAID array back, and how do I get GRUB to boot from it?


Further update:

N40L grub # dmraid -r
/dev/sdb: pdc, "pdc_bejigbccdb", mirror, ok, 486328064 sectors, data@ 0
/dev/sda: pdc, "pdc_bejigbccdb", mirror, ok, 486328064 sectors, data@ 0

N40L grub # dmraid -s
*** Set
name   : pdc_bejigbccdb
size   : 486328064
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

N40L grub # dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdb: asr     discovering
NOTICE: /dev/sdb: ddf1    discovering
NOTICE: /dev/sdb: hpt37x  discovering
NOTICE: /dev/sdb: hpt45x  discovering
NOTICE: /dev/sdb: isw     discovering
DEBUG: not isw at 250059348992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 250058267136
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi     discovering
NOTICE: /dev/sdb: nvidia  discovering
NOTICE: /dev/sdb: pdc     discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil     discovering
NOTICE: /dev/sdb: via     discovering
NOTICE: /dev/sda: asr     discovering
NOTICE: /dev/sda: ddf1    discovering
NOTICE: /dev/sda: hpt37x  discovering
NOTICE: /dev/sda: hpt45x  discovering
NOTICE: /dev/sda: isw     discovering
DEBUG: not isw at 250059348992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 250058267136
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi     discovering
NOTICE: /dev/sda: nvidia  discovering
NOTICE: /dev/sda: pdc     discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil     discovering
NOTICE: /dev/sda: via     discovering
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: not found pdc_bejigbccdb
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: not found pdc_bejigbccdb
NOTICE: added /dev/sdb to RAID set "pdc_bejigbccdb"
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: found pdc_bejigbccdb
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: found pdc_bejigbccdb
NOTICE: added /dev/sda to RAID set "pdc_bejigbccdb"
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bejigbccdb" to 16
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bejigbccdb" to 16
RAID set "pdc_bejigbccdb" was not activated
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "pdc_bejigbccdb"
DEBUG: freeing device "pdc_bejigbccdb", path "/dev/sda"
DEBUG: freeing device "pdc_bejigbccdb", path "/dev/sdb"

So my system sees the two drives and knows they should be part of an array, but it will not activate the array and thus does not create /dev/mapper/pdc_bejigbccdb, so I cannot install GRUB to it and boot from it.

How do I get dmraid to activate the array and create the mapper entry?
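For anyone hitting the same wall, the usual things to try at this point are a sketch like the following (hedged: these commands all require root, the dmraid and GRUB tool names are real, but whether they help depends on why activation is failing on this particular machine):

```shell
# Try to activate every discovered RAID set by hand:
dmraid -ay

# If that works, the mapper nodes should now exist:
ls /dev/mapper

# Then regenerate the initramfs and GRUB configuration so that the next
# boot assembles the array before mounting /:
update-initramfs -u
update-grub
grub-install /dev/mapper/pdc_bejigbccdb
```

If `dmraid -ay` still reports "was not activated", the failure has to be diagnosed before the GRUB steps are worth attempting.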

  • Check your kernel command-line options in /proc/cmdline; sometimes / is mounted by the kernel at boot rather than from fstab. Booting from RAID can be quite complicated, because the kernel needs to assemble the RAID before it has access to any configuration files. Usually this is implemented by an initrd, which is highly distribution-specific and may contain mistakes. You need to check the whole chain: What is mounted at kernel boot? Do you have an initrd? After that, extract your initrd and see how the real root is mounted there. – gena2x Jul 25 '14 at 10:45
  • Unfortunately I have only a little Linux knowledge, and this is pushing it. /proc/cmdline contains BOOT_IMAGE=/boot/vmlinuz-3.13.0-29-generic root=UUID=94082159-4d98-4b3c-a5d7-356d7f57bb7e ro quiet splash nomdmonddf nomdmonisw nomdmonddf nomdmonisw Both of my drives (sda and sdb) have the same UUID. – wkdmarty Jul 25 '14 at 11:48
  • Fakeraid happened. I'm not sure exactly how, but problems like this are why you should use either true hardware RAID (where the OS would never see a disk as a separate entity), or Linux's own software RAID. – Gilles 'SO- stop being evil' Jul 25 '14 at 20:45
  • I thought it was hardware RAID because it's set up via the BIOS (or the RAID BIOS during POST). I did this before I installed the OS. I'm confused. Is there any way out of this? – wkdmarty Jul 28 '14 at 08:28
  • OK, further update. I can boot from a Mint 17 live CD, and it correctly shows the RAID device in /dev/mapper. So I'm thinking one of the updates must have broken the RAID? Can anyone point me in the right direction as to where to look, and how to downgrade the offending package? – wkdmarty Jul 31 '14 at 13:20
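gena2x's suggestion above boils down to checking which root= device the kernel was told to mount. A minimal sketch of pulling that parameter out of a command line (the sample string and UUID are the ones quoted in the comments; on a live system you would read /proc/cmdline directly):

```shell
# Sample kernel command line, copied from the comment above.
# Normally you would use: cmdline=$(cat /proc/cmdline)
cmdline='BOOT_IMAGE=/boot/vmlinuz-3.13.0-29-generic root=UUID=94082159-4d98-4b3c-a5d7-356d7f57bb7e ro quiet splash nomdmonddf nomdmonisw'

# Extract the root= parameter: this is the device the initrd will mount as /.
root=$(printf '%s\n' "$cmdline" | sed -n 's/.*root=\([^ ]*\).*/\1/p')
echo "$root"
```

Because both halves of a broken mirror carry the same filesystem UUID, a root=UUID=... line like this one can silently mount either bare disk instead of the assembled array.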

1 Answer


I fixed it, but I honestly can't tell you how.

Basically, I booted into a live-USB version of Mint 17. I noticed the RAID array was healthy there, so I mounted the system and chrooted into it.

I then reinstalled dmraid and also installed mdadm (I don't know why I did that), updated my GRUB settings, and installed GRUB to the array.

One reboot later it complained about mdadm, but all is well and it's booting from the array now.
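For anyone following along, the procedure described above roughly corresponds to this sketch (hedged: the mapper name is taken from the question, the mount and chroot steps are my reconstruction rather than the exact commands used, and everything runs as root from the live session):

```shell
# From a Mint 17 live USB, where dmraid activates the array correctly:
mount /dev/mapper/pdc_bejigbccdb1 /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt

# Inside the chroot:
apt-get install --reinstall dmraid
apt-get install mdadm           # the answer did this too, reason unknown
update-initramfs -u             # so the initrd can assemble the array at boot
update-grub
grub-install /dev/mapper/pdc_bejigbccdb
```

The key step is most likely regenerating the initramfs from a system where the array is assembled, so that the next boot activates the mapper device before mounting /.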

Quite a surprise really. Thank you for all your help.
