I had a drive in my RAID 5 fail a while back. I believe the problem was caused by a power failure at the time, though I originally suspected the hard drive controllers on the motherboard (this is a system I put together myself).
Since then I have built a replacement system, transferred the drives over, and attempted to start the array. What I'm seeing now is that one drive is still not in a good enough state for the array to start.
Here is what I get when trying to assemble:
[root@localhost ~]# mdadm --assemble --force /dev/md0 /dev/sdf1 /dev/sde1 /dev/sdd1 /dev/sda1 -v
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 3.
mdadm: added /dev/sde1 to /dev/md0 as 1
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: added /dev/sda1 to /dev/md0 as 3 (possibly out of date)
mdadm: no uptodate device for slot 8 of /dev/md0
mdadm: added /dev/sdf1 to /dev/md0 as 0
mdadm: /dev/md0 assembled from 3 drives - not enough to start the array.
When I examine the drives, I get this:
[root@localhost ~]# mdadm --examine /dev/sd[a-z]1
/dev/sda1:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : 491fdb85:372da78e:8022a675:04a2932c
Name : kenya:0
Creation Time : Wed Aug 21 14:18:41 2013
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
Array Size : 7813527552 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906763776 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 0 sectors
Unused Space : before=262072 sectors, after=1024 sectors
State : clean
Device UUID : 879d0ddf:9f9c91c5:ffb0185f:c69dd71f
Internal Bitmap : 8 sectors from superblock
Update Time : Thu Feb 5 06:05:09 2015
Checksum : 758a6362 - correct
Events : 624481
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAA. ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/sdb1.
/dev/sdd1:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : 491fdb85:372da78e:8022a675:04a2932c
Name : kenya:0
Creation Time : Wed Aug 21 14:18:41 2013
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
Array Size : 7813527552 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906763776 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 0 sectors
Unused Space : before=262072 sectors, after=1024 sectors
State : clean
Device UUID : 3a403437:9a1690ea:f6ce8525:730d1d9c
Internal Bitmap : 8 sectors from superblock
Update Time : Thu Feb 5 06:07:11 2015
Checksum : 355d0e32 - correct
Events : 624485
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : 491fdb85:372da78e:8022a675:04a2932c
Name : kenya:0
Creation Time : Wed Aug 21 14:18:41 2013
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
Array Size : 7813527552 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906763776 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 0 sectors
Unused Space : before=262072 sectors, after=1024 sectors
State : clean
Device UUID : 7d7ec5fe:b4b55c4e:4e903357:1aa3bae3
Internal Bitmap : 8 sectors from superblock
Update Time : Thu Feb 5 06:07:11 2015
Checksum : da06428d - correct
Events : 624485
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf1:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : 491fdb85:372da78e:8022a675:04a2932c
Name : kenya:0
Creation Time : Wed Aug 21 14:18:41 2013
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906764800 (1862.89 GiB 2000.26 GB)
Array Size : 7813527552 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906763776 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 0 sectors
Unused Space : before=262072 sectors, after=1024 sectors
State : clean
Device UUID : c091025f:8296517b:0237935f:5cc03cfc
Internal Bitmap : 8 sectors from superblock
Update Time : Thu Feb 5 06:07:11 2015
Checksum : 8819fa93 - correct
Events : 624485
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg1:
MBR Magic : aa55
Partition[0] : 808960 sectors at 0 (type 17)
And then there's this:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear]
unused devices: <none>
I gathered this info from booting into recovery. The system is CentOS 6.2. From some help on IRC I have learned that the sda drive is out of sync with the rest of them. I believe the drive that failed is now listed as sdg, but I'm not certain of that. I also know that the drive order is now f, e, d, a (sdf, sde, sdd, sda).
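To double-check the "out of sync" diagnosis, here is a small awk sketch over the Events counters pasted from my --examine output above (not live mdadm output); it shows sda1 trailing the other three members, which matches the "possibly out of date" message from the assemble attempt:

```shell
# Compare the per-device Events counters copied from the --examine output.
# A member whose counter is lower than its peers' is stale, and mdadm will
# refuse to count it toward starting the array.
awk '{ ev[$1] = $2; print $1 ": " $2 }
     END { print "sda1 is behind by " ev["sdd1"] - ev["sda1"] " events" }' <<'EOF'
sda1 624481
sdd1 624485
sde1 624485
sdf1 624485
EOF
```

A gap of only 4 events is what makes me hopeful that forcing the assembly is still reasonable.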
I have a replacement drive for the dead one, ready to insert once I can get the rest of this rebuilt. I originally tried to mark the failed drive as removed from the array, but I cannot get that status to take.
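For what it's worth, here is the sequence I'm considering, written as a dry-run script so I can review each command before actually executing anything. My understanding (which I'd like confirmed) is that --assemble --force will reconcile the stale event count on sda1 so the array can start degraded with 4 of 5 devices; /dev/sdX1 is a placeholder for whatever name the replacement disk ends up with:

```shell
#!/bin/sh
# Recovery plan sketch. With DRY_RUN=1 (the default) each command is only
# printed; set DRY_RUN=0 to actually execute it.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Stop the partially assembled array left over from the failed attempt.
run mdadm --stop /dev/md0

# 2. Force-assemble from the four surviving members; --force should accept
#    sda1 despite its slightly lower event count, starting the array
#    degraded (4 of 5 devices).
run mdadm --assemble --force --verbose /dev/md0 \
    /dev/sdf1 /dev/sde1 /dev/sdd1 /dev/sda1

# 3. Once the array is running degraded, add the replacement disk
#    (sdX1 is a placeholder) and let the rebuild begin.
run mdadm --manage /dev/md0 --add /dev/sdX1

# 4. Watch rebuild progress.
run cat /proc/mdstat
```

Please tell me if any step here is wrong or dangerous before I run it for real.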
My attempts to sign up for and use the linux-raid mailing list have left me wondering whether it is even active anymore ("delivery to [email protected] has failed permanently"). The help from the CentOS IRC channel suggested getting further help from that source, so I'm now trying here.
I have also read through this thread, but wanted to ask in another forum for a more specific opinion before attempting any of the suggestions toward the end of it: http://ubuntuforums.org/showthread.php?t=2276699.
If there is a working mailing list for mdadm or linux-raid, I'm willing to post there instead. And if I can provide more data about this situation, please let me know.