(The correct answer to this question comes from one that bears little surface resemblance to it, so I'm asking the question and answering it myself with reference to that one.)
Consider a virtual host environment in which a VM has been assigned 3 separate virtio drives, appearing as 3 separate block devices, and assigned to 3 separate VGs (volume groups).
/dev/vda - vgroot - os
/dev/vdb - vgdata - db
/dev/vdc - vgvar  - var
While the VM is online, /dev/vdb goes away. This could happen because the VM was moved to a new hypervisor but that particular volume got stuck behind, or because an insane system administrator temporarily removed the volume and attached it to another host. In my case, I was feeling lucky.
When the volume comes back, the Linux kernel (not all kernels, actually, but at least since RHEL 6) does not give it back its original device node, because that device is technically still seen as 'open'; instead it appears as a new block device: /dev/vdd.
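One way to see why the old node stays broken (a sketch, assuming the vgdata/db names from the layout above; requires root and a real LVM setup, so adjust to taste): the device-mapper table behind the LV records the backing disk's major:minor pair, and after the disk returns as /dev/vdd that pair still points at the vanished /dev/vdb.

```shell
# Hedged diagnostic sketch -- vgdata-db is the example LV from this question.
# Show the major:minor the LV's table still references:
dmsetup table vgdata-db
# Compare against the device nodes actually present now:
ls -l /dev/vdb /dev/vdd 2>/dev/null
```

If the major:minor in the table no longer matches any existing node, every read through the LV fails exactly as shown below.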
Afterwards, all the LVM commands, such as vgs, report:
/dev/data/db: read failed after 0 of 4096 at 10733158400: Input/output error
/dev/data/db: read failed after 0 of 4096 at 10733215744: Input/output error
/dev/data/db: read failed after 0 of 4096 at 0: Input/output error
/dev/data/db: read failed after 0 of 4096 at 4096: Input/output error
However, pvscan and vgscan still detect the original volumes; LVM is simply trying to read them through the old block device. Re-mounting doesn't help, and, for the sake of argument, rebooting is unacceptable. What to do?
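Before attempting any fix, it helps to confirm exactly which node LVM is still bound to. A hedged sketch (the vgdata/db names are from the example above, and these commands need root on a host with LVM installed):

```shell
# Which node each PV currently claims, by UUID:
pvs -o pv_name,vg_name,pv_uuid
# Which underlying device backs each LV in the affected VG:
lvs -o lv_name,vg_name,devices vgdata
# Open count > 0 here is what keeps the stale mapping pinned:
dmsetup info -c vgdata-db
```

If `lvs` still lists /dev/vdb under "Devices" while the disk now lives at /dev/vdd, that mismatch is the whole problem.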