
I have an LVM thin pool whose metadata got full. I didn't understand what was wrong and attempted to fix it with

lvconvert --repair pve/data

This turned out to be a bad idea: it now reports no data for any of the LVs. This is similar to the question LVM: How to recover LVM thin pool / volume after failed repair?, except that I never tried to extend the metadata, and I cannot get that solution to work.

From the research I have done, it seems I can restore the metadata from an archive. This is a Proxmox install, so everything on the pool is VM disks. There is an archive file from "before running lvconvert --repair". There is also a file from "before lvcreate" for a VM disk that I don't care about, and I know the LVM worked for a few months after I created that disk. So if it is possible to save the rest of the data using that archive, that is perfectly acceptable. At this point I am trying to mitigate data loss more than get the pool running as it should.
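To pick the right archive, my plan is to list the archived metadata versions first and read their descriptions, something like this (the archive file name is the one from my /etc/lvm/archive):

vgcfgrestore --list pve
# lists every archived/backed-up metadata version for the VG, each with a
# description like "Created *before* executing 'lvconvert --repair pve/data'"
less /etc/lvm/archive/pve_00064-1664281480.vg
# the archive is plain text, so the description and creation_time fields
# confirm which operation it was taken before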

To try to be safe here, I have cloned the disk to a new (larger) disk using dd. Then, booted from a live image, I attempted to restore from the archive with these commands:

lvchange -an pve
pvcreate --uuid "z34DFR-Kkk6-4P5m-N1uy-n7dh-sI11-ABuxCl" --restorefile /etc/lvm/archive/pve_00064-1664281480.vg /dev/sda

But that gives

Couldn't find device with uuid z34DFR-Kkk6-4P5m-N1uy-n7dh-sI11-ABuxCl.
UUID z34DFR-Kkk6-4P5m-N1uy-n7dh-sI11-ABuxCl already in use on "/dev/sda3".
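If I read that error correctly, the PV with that UUID still exists on /dev/sda3, and my mistake was pointing pvcreate at the whole disk /dev/sda instead of the partition. So I assume the pvcreate step can be skipped entirely and I should attempt the metadata restore directly. This is what I am considering, untested; as far as I understand, --force is required because the VG contains a thin pool, and vgcfgrestore restores only the LVM metadata, not the pool's internal mapping metadata:

vgchange -an pve     # deactivate the VG first (from the live image,
                     # since root and swap also live on this VG)
vgcfgrestore -f /etc/lvm/archive/pve_00064-1664281480.vg pve --force
vgchange -ay pve     # reactivate, then compare lvs -a output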

Here is some info on the LVM and the disks. /dev/sdb is currently a clone of /dev/sda.

root@frank:~# fdisk -l
Disk /dev/sda: 1.8 TiB, 1999844147200 bytes, 3905945600 sectors
Disk model: PERC 6/i        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EAE945C0-902E-4C35-8544-F5D0BE412BF3

Device       Start        End    Sectors  Size Type
/dev/sda1       34       2047       2014 1007K BIOS boot
/dev/sda2     2048    1050623    1048576  512M EFI System
/dev/sda3  1050624 3905945566 3904894943  1.8T Linux LVM


Disk /dev/sdb: 5.5 TiB, 5999532441600 bytes, 11717836800 sectors
Disk model: PERC 6/i        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1D53AC81-30BC-BF46-BEA9-B61C0FF235DF

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 5859377151 5859375104  2.7T Linux filesystem


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 128 GiB, 137438953472 bytes, 268435456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-data_meta0: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@frank:~# pvs -a
  PV         VG  Fmt  Attr PSize  PFree  
  /dev/sda2           ---      0       0 
  /dev/sda3  pve lvm2 a--  <1.82t 211.00g
  /dev/sdb1           ---      0       0 

root@frank:~# vgs -a
  VG  #PV #LV #SN Attr   VSize  VFree  
  pve   1  21   0 wz--n- <1.82t 211.00g
    
root@frank:~# lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz--  <1.44t             0.00   0.15                            
  data_meta0      pve -wi-a-----  15.00g                                                    
  [data_tdata]    pve Twi-ao----  <1.44t                                                    
  [data_tmeta]    pve ewi-ao----  15.00g                                                    
  [lvol1_pmspare] pve ewi-------  15.00g                                                    
  root            pve -wi-ao---- 128.00g                                                    
  swap            pve -wi-ao----   8.00g                                                    
  vm-100-disk-0   pve Vwi---tz--   8.00g data                                               
  vm-201-disk-0   pve Vwi---tz-- 128.00g data                                               
  vm-205-disk-0   pve Vwi---tz-- 128.00g data                                               
  vm-300-disk-0   pve Vwi---tz--  24.00g data                                               
  vm-301-disk-0   pve Vwi---tz--  32.00g data                                               
  vm-302-disk-0   pve Vwi---tz--  64.00g data                                               
  vm-400-disk-0   pve Vwi---tz-- 128.00g data                                               
  vm-401-disk-0   pve Vwi---tz-- 128.00g data                                               
  vm-402-disk-0   pve Vwi---tz-- 128.00g data                                               
  vm-403-disk-0   pve Vwi---tz-- 128.00g data                                               
  vm-404-disk-0   pve Vwi---tz-- 256.00g data                                               
  vm-506-disk-0   pve Vwi---tz--  32.00g data                                               
  vm-508-disk-0   pve Vwi---tz-- 800.00g data                                               
  vm-508-disk-1   pve Vwi---tz--  24.00g data                                               
  vm-600-disk-0   pve Vwi---tz--  16.00g data                                               
  vm-601-disk-0   pve Vwi---tz--  16.00g data                                               
  vm-602-disk-0   pve Vwi---tz--  32.00g data  
root@frank:~# pvscan
  PV /dev/sda3   VG pve             lvm2 [<1.82 TiB / 211.00 GiB free]
  Total: 1 [<1.82 TiB] / in use: 1 [<1.82 TiB] / in no VG: 0 [0   ]

root@frank:~# lvscan
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [128.00 GiB] inherit
  ACTIVE            '/dev/pve/data' [<1.44 TiB] inherit
  inactive          '/dev/pve/vm-100-disk-0' [8.00 GiB] inherit
  inactive          '/dev/pve/vm-201-disk-0' [128.00 GiB] inherit
  inactive          '/dev/pve/vm-600-disk-0' [16.00 GiB] inherit
  inactive          '/dev/pve/vm-601-disk-0' [16.00 GiB] inherit
  inactive          '/dev/pve/vm-506-disk-0' [32.00 GiB] inherit
  inactive          '/dev/pve/vm-602-disk-0' [32.00 GiB] inherit
  inactive          '/dev/pve/vm-205-disk-0' [128.00 GiB] inherit
  inactive          '/dev/pve/vm-300-disk-0' [24.00 GiB] inherit
  inactive          '/dev/pve/vm-508-disk-0' [800.00 GiB] inherit
  inactive          '/dev/pve/vm-508-disk-1' [24.00 GiB] inherit
  inactive          '/dev/pve/vm-400-disk-0' [128.00 GiB] inherit
  inactive          '/dev/pve/vm-401-disk-0' [128.00 GiB] inherit
  inactive          '/dev/pve/vm-402-disk-0' [128.00 GiB] inherit
  inactive          '/dev/pve/vm-403-disk-0' [128.00 GiB] inherit
  inactive          '/dev/pve/vm-301-disk-0' [32.00 GiB] inherit
  inactive          '/dev/pve/vm-302-disk-0' [64.00 GiB] inherit
  inactive          '/dev/pve/vm-404-disk-0' [256.00 GiB] inherit
  ACTIVE            '/dev/pve/data_meta0' [15.00 GiB] inherit

root@frank:~# vgdisplay 
  --- Volume group ---
  VG Name               pve
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  125
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                21
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476671
  Alloc PE / Size       422655 / 1.61 TiB
  Free  PE / Size       54016 / 211.00 GiB
  VG UUID               H0G0xv-ddq1-BdvI-fIxF-dg8e-BT2X-CqYWFX

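One more thing I have noticed: from what I have read, lvconvert --repair keeps the pre-repair pool metadata in a new LV named data_meta0, which matches the 15 GiB pve/data_meta0 above. If that is correct, the old mappings may still be readable from it. This is what I am considering trying next (commands from thin-provisioning-tools; I have not run them yet):

thin_check /dev/mapper/pve-data_meta0
# verifies the old metadata device is structurally sound
thin_dump /dev/mapper/pve-data_meta0 > /root/data_meta0.xml
# dumps the old mappings as XML so I can see whether the thin volumes are in there
# if the dump looks sane, swap the old metadata back into the pool:
vgchange -an pve
lvconvert --thinpool pve/data --poolmetadata pve/data_meta0
# this swaps data_meta0 with the pool's current tmeta
vgchange -ay pve
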
Thank you so much for any answers or tips on this. As I mentioned, data recovery is becoming the first priority. If I can provide anything more that would make it easier to give tips, I will!

jakobS
  • The interesting part of the question would be how things looked before you tried to repair it, i.e.: *what* did you try to repair? – U. Windl Sep 06 '21 at 22:43
  • The thing is, that was 2 months ago. Then I had to leave the location where the server was, so I postponed fixing it when that "repair" did not work, and I don't remember exactly how it looked. But all of my VM disks were offline at least, and the metadata showed as full. I do remember that after the repair command there were a lot fewer numbers displayed when running "lvs -a". Haha, I don't know if this is useful at all – jakobS Sep 07 '21 at 06:56

0 Answers