In my case lvextend --poolmetadata refused to make any changes and instead asked me to perform a Manual Repair™, so I will describe what worked in my situation.
In this scenario all the system partitions live in thinly provisioned logical volumes, and the whole LVM stack sits inside a LUKS-encrypted partition. Those of you who don't use encryption can skip the cryptsetup command in step 2.
1. Add an extra drive with more storage capacity than your system drive, boot into recovery mode using the operating system's install media and, without mounting any partitions, start a root shell. To improve the chances of a successful recovery, use a distro whose installer supports thin provisioning and that ships a recent kernel (e.g. Fedora rather than CentOS or Red Hat).
2. Identify and mount your extra drive, take a full image backup of the system drive and open the encrypted partition:
$ fdisk -l #Identify drives and partitions
(We will use /dev/sda for the system drive, /dev/sdb for the extra drive and volname for the LUKS encrypted volume name in this example)
$ mount /dev/sdb1 /mnt #Mount the extra drive
$ cd /mnt
$ cat /dev/sda > sda.img #Back up the system drive
$ sha256sum /dev/sda > sda.sum
$ sed -i 's/\/dev\/sda/.\/sda.img/g' sda.sum
$ sha256sum -c sda.sum #Verify backup integrity
$ cryptsetup open /dev/sda2 volname #Open the encrypted LVM
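The sed command in step 2 simply rewrites the path recorded in the checksum file, so that sha256sum -c verifies the backup image instead of re-reading the original device. A minimal sketch of the same trick on a scratch file (all file names here are invented for the demo; no real devices are touched):

```shell
# Demo of the checksum path-rewrite trick from step 2, on throwaway files.
cd "$(mktemp -d)"
printf 'fake disk contents' > source.bin       # stands in for /dev/sda
sha256sum source.bin > source.sum              # checksum recorded under the old name
cp source.bin backup.img                       # stands in for sda.img
sed -i 's/source\.bin/backup.img/' source.sum  # point the checksum at the backup
sha256sum -c source.sum                        # now verifies backup.img instead
```

The same one-line sed edit is all that is needed because the .sum file is just "hash  path" pairs in plain text.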
3. At this point some distributions may try to activate the logical volumes automatically, which leaves the lvs, vgs and pvs commands hanging and prevents us from working with the logical volumes. In this step we kill that process and put the logical volumes into the proper state:
$ ps aux | grep scan #Look for 'lvm pvscan' or similar
root 1234 0.0 0.0 64843 9472 ? Ss 11:11 0:02 /usr/sbin/lvm pvscan -a ...
$ kill -9 1234 #Get the PID and kill it with fire
$ pvscan #Scan without activating volumes
(vg and pool00 correspond to the volume group and thin pool names)
$ lvchange -an vg #Deactivate entire volume group
$ lvchange -pr -ay vg/pool00_tmeta #Activate metadata in readonly mode
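If grepping ps output by hand feels error-prone, pgrep -f can locate the scanner's PID directly. A sketch using a dummy sleep process in place of the real 'lvm pvscan' (the '3131' marker is made up for the demo so nothing real gets killed):

```shell
# Demo: find a process by its command line and kill it, as done above;
# a dummy sleep stands in for /usr/sbin/lvm pvscan.
sleep 3131 &                              # stand-in for the scanning process
PID=$(pgrep -f 'sleep 3131' | head -n1)   # scripted version of 'ps aux | grep scan'
kill -9 "$PID"
wait "$PID" 2>/dev/null || true           # reap the killed process
```

In the real scenario the pattern would be something like 'lvm pvscan' instead of the dummy sleep.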
4. Now it's time to create a new logical volume for the metadata and repair it:
$ lvs -a --units m | grep pool00_tmeta #Get the current metadata size
[pool00_tmeta] vg ewu-ai---- 128.00m
$ lvcreate -L 256M -n pool00R vg #Create a larger logical volume
$ lvchange -ay vg/pool00R #Activate the new logical volume
$ thin_repair -i /dev/vg/pool00_tmeta -o /dev/vg/pool00R
$ thin_check /dev/vg/pool00R #Verify it's been repaired properly
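The 256M figure above is simply double the size lvs reported for the old metadata volume. A sketch of deriving it from the sample output line, assuming the --units m format shown in step 4:

```shell
# Sketch: compute the new metadata size as twice the value lvs reports.
# The sample line is copied from step 4; parsing assumes 'lvs --units m'.
LINE='[pool00_tmeta] vg ewu-ai---- 128.00m'
CUR=$(echo "$LINE" | awk '{sub(/m$/, "", $NF); print $NF}')  # strip the unit
NEW=$(awk -v s="$CUR" 'BEGIN {printf "%dM", s * 2}')         # double it
echo "$NEW"  # size to pass to lvcreate -L
```

Doubling is just a comfortable margin; anything larger than the old metadata volume gives thin_repair room to work.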
5. Finally, we replace the metadata logical volume of the thin pool and remove the old one:
$ lvchange -an vg #Deactivate all LVs again
$ lvconvert --thinpool vg/pool00 --poolmetadata vg/pool00R #Swap in the repaired metadata
$ lvs -a --units m | grep pool00_tmeta #Verify the LVs have been swapped
[pool00_tmeta] vg ewu-ai---- 256.00m
$ lvremove vg/pool00R #Get rid of the damaged metadata
If anything went wrong during the process, you can restore your backup from the extra drive like this:
$ cat /mnt/sda.img > /dev/sda
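As a variant, if space on the extra drive is tight, the image can be compressed on the fly, assuming gzip is available in the recovery environment. A sketch on a scratch file rather than a real device:

```shell
# Space-saving variant of the backup/restore from step 2, shown on throwaway
# files; in real use /dev/sda takes the place of demo.bin and the .gz file
# lives on the extra drive.
cd "$(mktemp -d)"
printf 'raw disk bytes' > demo.bin
gzip -c demo.bin > sda.img.gz         # backup, compressed as it is written
gunzip -c sda.img.gz > restored.bin   # restore: gunzip -c ... > /dev/sda
cmp demo.bin restored.bin             # byte-for-byte identical round trip
```

The checksum verification from step 2 still applies; just run sha256sum against the .gz file instead.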