
Can anyone help? I have 2 disks spanning my main partition: one is 460 GB and the other is 1 TB. I would like to remove the 1 TB disk so I can use it in another machine.

The volume group isn't using a lot of space anyway. I only have Docker with a few containers using that disk, and my Docker container volumes are on a different physical disk anyway.

If I just remove the disk (physically), it is going to cause problems, right?

Here is some info


pvdisplay


  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               ubuntu-vg
  PV Size               <464.26 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              118850
  Free PE               0
  Allocated PE          118850
  PV UUID               DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4

  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               ubuntu-vg
  PV Size               931.51 GiB / not usable 4.69 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               0
  Allocated PE          238466
  PV UUID               Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU

LVM confuses me a little :-)

Is there not just a simple case of saying,

"remove yourself from the VG and reassign anything you are using to the remaining group member"?

It's worth noting that the 1 TB disk was added afterwards, so I assume it's easier to remove?

Any help is really appreciated.

EDIT

Also some more info

df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                                16G     0   16G   0% /dev
tmpfs                              3.2G  1.4M  3.2G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  1.4T  5.1G  1.3T   1% /

It seems it's only using 1%.

Also the output of lvs:

lvs
  LV        VG        Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- 1.36t
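
The 1.36t LSize is simply the two PVs' extents added together; a quick sanity check of the arithmetic, using the PE counts from the pvdisplay output above:

```shell
# Sanity check: both PVs' extents together should account for the LV's 1.36 TiB.
# PE counts taken from the pvdisplay output in the question.
pe_mib=4                 # PE Size: 4.00 MiB
sda3_pe=118850
sdb1_pe=238466
total_mib=$(( (sda3_pe + sdb1_pe) * pe_mib ))
echo "${total_mib} MiB"  # 1429264 MiB, i.e. about 1.36 TiB
```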

EDIT

pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               ubuntu-vg
  PV Size               <464.26 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              118850
  Free PE               0
  Allocated PE          118850
  PV UUID               DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4

  --- Physical Segments ---
  Physical extent 0 to 118849:
    Logical volume  /dev/ubuntu-vg/ubuntu-lv
    Logical extents 0 to 118849

  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               ubuntu-vg
  PV Size               931.51 GiB / not usable 4.69 MiB
  Allocatable           NO
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               0
  Allocated PE          238466
  PV UUID               Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU

  --- Physical Segments ---
  Physical extent 0 to 238465:
    Logical volume  /dev/ubuntu-vg/ubuntu-lv
    Logical extents 118850 to 357315
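
Reading that mapping: sdb1 holds the tail end of the LV (logical extents 118850 to 357315), which is why the LV has to shrink below sda3's capacity before sdb1 can be emptied. The extent arithmetic checks out:

```shell
# /dev/sdb1 carries logical extents 118850..357315 of the LV, i.e. its tail.
first=118850
last=357315
echo "$(( last - first + 1 )) extents on sdb1"   # 238466, matching its Total PE
```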

EDIT

Output of

lsblk -f
NAME   FSTYPE     LABEL UUID                                   MOUNTPOINT
loop0  squashfs                                                /snap/core/9066
loop2  squashfs                                                /snap/core/9289
sda
├─sda1 vfat             E6CC-2695                              /boot/efi
├─sda2 ext4             0909ad53-d6a7-48c7-b998-ac36c8f629b7   /boot
└─sda3 LVM2_membe       DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4
  └─ubuntu--vg-ubuntu--lv
       ext4             b64f2bf4-cd6c-4c21-9009-76faa2627a6b   /
sdb
└─sdb1 LVM2_membe       Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU
  └─ubuntu--vg-ubuntu--lv
       ext4             b64f2bf4-cd6c-4c21-9009-76faa2627a6b   /
sdc    xfs              1a9d0e4e-5cec-49f3-9634-37021f65da38   /gluster/bricks/2

sdc above is a different drive - and not related.

Mark Smith
  • Your `pvdisplay` seems to indicate that both PVs are quite full, as far as LVM is concerned. If they contain any LVs that are not being used for anything, you should remove them with `lvremove` to gain some room to manoeuvre. Or if you have large but mostly empty filesystems, see if it's possible to shrink them and their LVs (depends on filesystem type). – telcoM Jun 07 '20 at 13:09
  • Thanks. `lvremove ubuntu-vg` gives: Logical volume ubuntu-vg/ubuntu-lv contains a filesystem in use. – Mark Smith Jun 07 '20 at 14:02
  • It's strange - well, for me anyway, as I know there isn't much space in use. It's just a simple Ubuntu and also a small Docker install - I am certain it will fit onto 460 GB (first drive) – Mark Smith Jun 07 '20 at 14:03
  • What does the `lvs` command say? (please edit it into your question) – telcoM Jun 07 '20 at 14:29
  • Updated the question with some info, also with lvs output. It seems to think it's only using 1% – Mark Smith Jun 07 '20 at 14:47
  • @telcoM actually you can see that 1.4T is the sum of the first and second drives - (2nd drive is the one I want to remove) - and it appears it's only using 5.1G - which will easily fit on the first drive completely - right? – Mark Smith Jun 07 '20 at 14:48
  • Add the output of `lsblk -f` to your question. – Nasir Riley Jun 07 '20 at 15:33
  • You have an 1.4T filesystem that is mostly empty, but at the LVM level, 100% of the disk capacity has been allocated into making up that filesystem. You'll need to shrink that filesystem before you can do anything else at the LVM level. And for that, the filesystem type is a vital thing to know - for example, there is no production-grade tool for shrinking a XFS filesystem as far as I know. – telcoM Jun 07 '20 at 15:58
  • Thanks @telcoM - added to question – Mark Smith Jun 07 '20 at 16:56
  • It's ext4 as far as I can see. I didn't do anything special – Mark Smith Jun 07 '20 at 16:58
  • @NasirRiley Updated question with lsblk -f – Mark Smith Jun 07 '20 at 17:09

2 Answers


Since the filesystem you'll need the disk removed from is your root filesystem, and the filesystem type is ext4, you'll have to boot the system from some live Linux boot media first. Ubuntu Live would probably work just fine for this.

Once booted from the external media, run sudo vgchange -ay ubuntu-vg to activate the volume group so that you'll be able to access the LVs, but don't mount the filesystem: ext2/3/4 filesystems need to be unmounted for shrinking. Run a filesystem check first, since resize2fs will refuse to shrink a filesystem that has not been recently checked, then shrink the filesystem to 10G (or whatever size you wish - it can easily be extended again later, even on-line):

sudo e2fsck -f /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv 10G

Pay attention to the messages output by resize2fs - if it says the filesystem cannot be shrunk that far, specify a bigger size and try again.

This is the only step that needs to be done while booted on the external media; for everything after this point, you can boot the system normally.

At this point, the filesystem should have been shrunk to 10G (or whatever size you specified). The next step is to shrink the LV. It is vitally important that the new size of the LV should be exactly the same or greater than the new size of the filesystem! You don't want to cut off the tail end of the filesystem when shrinking the LV. It's safest to specify a slightly bigger size here:

sudo lvreduce -L 15G /dev/mapper/ubuntu--vg-ubuntu--lv

Now, use pvdisplay or pvs to see if LVM now considers /dev/sdb1 totally free or not. In pvdisplay, the Total PE and Free PE values for sdb1 should be equal - in pvs output, the PFree value should equal PSize respectively. If this is not the case, then it will be time to use pvmove:
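
A quick way to compare the two values is to query just those columns and check them for equality. This is only a sketch: the sample line below is a hypothetical result of `sudo pvs --noheadings -o pv_name,pv_size,pv_free /dev/sdb1`, with values assumed for illustration:

```shell
# Hypothetical output line from:
#   sudo pvs --noheadings -o pv_name,pv_size,pv_free /dev/sdb1
# (sample values assumed for illustration, not from a real run)
sample='  /dev/sdb1  931.51g 931.51g'
# If PSize (field 2) equals PFree (field 3), the PV holds no allocated extents.
echo "$sample" | awk '{ if ($2 == $3) print "free"; else print "still in use" }'
```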

sudo pvmove /dev/sdb1

After this, the sdb1 PV should definitely be totally free according to LVM and it can be reduced out of the VG.

sudo vgreduce ubuntu-vg /dev/sdb1

If you wish, you can then remove the LVM signature from the ex-PV:

sudo pvremove /dev/sdb1

But if you are going to overwrite it anyway, you can omit this step.

After these steps, the shrunken filesystem will still be sized at 10G (or whatever you specified) even though the LV might be somewhat bigger than that. To fix that:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

When extending a filesystem, you don't have to specify a size: the tool will automatically extend the filesystem to match the exact size of the innermost device containing it. In this case, the filesystem will be sized according to the size of the LV.

Later, if you wish to extend the LV+filesystem, you can do it with just two commands:

sudo lvextend -L <new size> /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

You can do this even while the filesystem is in use and mounted. Because shrinking a filesystem is harder than extending it, it might be useful to hold some amount of unallocated space in reserve at the LVM level - you will be able to use it at a moment's notice to create new LVs and/or to extend existing LVs in the same VG as needed.
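
For a rough sense of the reserve this leaves: with sdb1 gone and the LV at 15G (the example size used above), the VG still has plenty of unallocated extents on sda3. The figures below come from the question's pvdisplay output:

```shell
# How much unallocated space remains on /dev/sda3 with a 15 GiB LV?
pe_mib=4
sda3_pe=118850                      # Total PE on /dev/sda3
lv_pe=$(( 15 * 1024 / pe_mib ))     # a 15 GiB LV = 3840 extents
free_pe=$(( sda3_pe - lv_pe ))
echo "${free_pe} free PE = ~$(( free_pe * pe_mib / 1024 )) GiB unallocated"
```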

telcoM


If you run vgreduce ubuntu-vg /dev/sdb1 and it gives a message that /dev/sdb1 is still in use, that means there is still data allocated on it and you can't remove it without causing issues.

Otherwise, it will successfully remove it from the volume group and you can then run pvremove /dev/sdb1 to remove the LVM labels from it and then remove the disk from the machine and use it elsewhere.

You can use pvmove /dev/sdb1, but if you get No extents available for allocation, that means there aren't any free extents on the other PVs in the volume group to move the data to.

If you run pvdisplay -m, you can see the mapping data for the physical volumes, including the physical extents. For example, if you see FREE under Physical Segments, you can run pvmove -v /dev/sdb1:<physical_extent_with_data> /dev/sda3:<physical_extent_free> --alloc anywhere. In your case, it doesn't look like that is going to work, because the output of pvdisplay shows that both PVs are full, which is why you are getting the No extents available for allocation message.

Before you do any of this, make sure that you have backed up your data. It looks like you're going to have to start all over again if you want to remove that disk, unless you can use lvreduce. In the future, I recommend creating multiple volume groups so that you only have to rebuild the one with the system installation.

Nasir Riley