I have a 5-year-old PR4100 with 4 × 2 TB WD Reds. It was always configured with the default RAID 1 setting. A week ago, my PR4100 showed a red LED on the first drive. My files were still accessible and everything was working fine. But when I ran a quick drive test, the dashboard reported that drive 3 was failing. This seemed a bit odd to me, so I restarted the NAS.
After the restart, all drives show a red LED on the system. My files and shares aren't accessible anymore, and it seems the drives aren't mounted. The dashboard says I need to configure a RAID volume again, which I obviously don't want to do, since that would wipe the drives. I normally had RAID auto-rebuild enabled, but that option is now missing from the dashboard.
Running a full test again, the system said drive 3 should be replaced. Today I received a new, identical 2 TB WD Red HDD and replaced drive 3. I somehow hoped this would magically start rebuilding the RAID, but the NAS just does... nothing. In another drive test, all drives now appear healthy, yet all four drives still have red LEDs on the system.
I can still log in to the dashboard and have full access over SSH, but I can't access my files in any way. I suspect the RAID configuration or profile is missing.
I haven't read or written any files on the HDDs since the errors, so I'm hoping all my very personal files on the NAS aren't damaged or gone. But I have no idea how to proceed. Manually start rebuilding the RAID from SSH? Retrieve or restore the missing RAID profile? Any help is very much appreciated!
root@MyCloudPR4100 ~ # mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Feb 11 11:13:50 2023
        Raid Level : raid1
        Array Size : 2094080 (2045.00 MiB 2144.34 MB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Feb 11 11:13:53 2023
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : MyCloudPR4100:0  (local to host MyCloudPR4100)
              UUID : c93335dc:dd795ea9:4612ae75:b4b3e02d
            Events : 5

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       49        2      active sync   /dev/sdd1
       -       0        0        3      removed

root@MyCloudPR4100 ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid1 sdd1[2] sdb1[1] sda1[0]
      2094080 blocks super 1.2 [4/3] [UUU_]
      bitmap: 0/1 pages [0KB], 65536KB chunk
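If it helps anyone reading, my understanding of that mdstat line is that [4/3] [UUU_] means the array expects 4 members but only 3 are active, with the fourth slot (my replaced drive 3) missing. A tiny sketch of how I read it, hedged since I'm no mdadm expert:

```shell
# Parse the counters out of the mdstat line quoted above.
line='2094080 blocks super 1.2 [4/3] [UUU_]'
counts=$(printf '%s\n' "$line" | grep -o '\[[0-9]*/[0-9]*\]')
status=$(printf '%s\n' "$line" | grep -o '\[U*_*\]')
echo "$counts $status"   # [4/3] [UUU_]: 4 expected members, 3 up, slot 4 down
```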
Full kernel info: https://pastebin.com/Du2HJ7kZ
Output of mdadm --examine /dev/sd* is in this pastebin: https://pastebin.com/u3BA6FuR
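In case it's useful, these are the read-only checks I'm planning to run next over SSH before touching anything. I'm assuming the data array would be /dev/md1 and that the data lives on partition 2 of each disk, since md0 above is only the ~2 GB system partition; please correct me if that's wrong:

```shell
# Read-only inspection; none of these should write to the disks.
cat /proc/mdstat                  # which arrays are currently assembled
mdadm --examine /dev/sda2         # superblock on each data partition
mdadm --examine /dev/sdb2         # (partition number 2 is my assumption)
mdadm --examine /dev/sdc2
mdadm --examine /dev/sdd2

# If the data array simply isn't assembled, this should bring it up
# read-only from the existing superblocks (no rebuild, no writes),
# leaving out the brand-new drive 3 (sdc):
# mdadm --assemble --readonly /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdd2
```

Would that be the right direction, or is there something safer I should try first?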