Two days ago I ran into a RAID issue on my QNAP TS-251+. I had not paid much attention to the box until the problem appeared, so I dug into the details to find out what type of RAID it uses, how the RAID/LVM stack is constructed, and so on. The device is configured with mdadm software RAID, LVM, and DRBD. What I do not understand is the output below.
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdc3[1] sda3[0]
2920311616 blocks super 1.0 [2/2] [UU]
..snipped..
[~] # lvs -a -o +devices
Found duplicate PV zHn9BjXkuAp8o1dkahbrsfhfQPvKMXb1: using /dev/drbd1 not /dev/md1
Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
  LV    VG    Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  lv1   vg288 -wi-ao---- 2.69t                                                   /dev/drbd1(7129)
  lv544 vg288 -wi------- 27.85g                                                  /dev/drbd1(0)
[~] # blkid | grep 1471da3c-5ef3-47a3-96f5-7d93367d8fa0
/dev/mapper/cachedev1: LABEL="DataVol1" UUID="1471da3c-5ef3-47a3-96f5-7d93367d8fa0" TYPE="ext4"
/dev/mapper/vg288-lv1: LABEL="DataVol1" UUID="1471da3c-5ef3-47a3-96f5-7d93367d8fa0" TYPE="ext4"
So I have a duplicated UUID on two mapper devices, and I am not quite sure what that means. I also cannot find a way to replicate this output on my own Linux box. In case you wonder why I posted: this is just for education purposes.
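A duplicate filesystem UUID simply means two block devices expose the same ext4 superblock, which happens whenever one device is a layer (or a copy) of the other. A minimal sketch that reproduces the same blkid symptom on any Linux box, without root, is to probe two image files that share one superblock (file names here are my own invention, not from the NAS):

```shell
# Create a small ext4 image, then copy it byte-for-byte.
# The copy carries the identical superblock, so blkid reports
# the same UUID for two different "devices" -- the same symptom
# as cachedev1 vs. vg288-lv1 on the NAS.
truncate -s 16M disk-a.img
mkfs.ext4 -q -F disk-a.img
cp disk-a.img disk-b.img
blkid disk-a.img disk-b.img
```

On the NAS itself the cause is stacking rather than copying: cachedev1 appears to be a device-mapper layer sitting on top of lv1, so both names resolve to the same underlying filesystem. That is also why LVM prints the "duplicate PV" message for /dev/drbd1 vs. /dev/md1: drbd1 is stacked on md1, so both expose the same PV label, and LVM picks the top of the stack.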
Also, as you may notice, my NAS is configured with DRBD, which is actually offline (StandAlone). I do not know what the main purpose of DRBD is on a standalone NAS. Perhaps it only comes into play if the box is pulled into a cluster somewhere.
[~] # cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by
@U16BuildServer104, 2018-05-28 04:25:18, HA:disabled
'1': cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r----s sync'ed:0.0%
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2920310784
~Boonchu/Thailand