I have a VM backed by a qcow2 image and want to attach another disk to it.

# create new qcow2 disk
qemu-img create -f qcow2 vm-disk2 500G

Then I attach it as sdb:

virsh attach-disk myvm /var/lib/libvirt/images/vm-disk2 sdb --persistent --live --subdriver qcow2
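
For reference, `virsh dumpxml myvm` shows what libvirt generated for the attach. Without an explicit `--targetbus`, libvirt infers the bus from the target name (`sd*` → scsi, `vd*` → virtio), so I'd expect something roughly like this sketch (paths from the command above, everything else illustrative):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/vm-disk2'/>
  <target dev='sdb' bus='scsi'/>
</disk>
```

With `vdb` as the target, the same element would get `bus='virtio'` instead.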

Then I reboot myvm, but I don't see sdb in the output of:

sudo fdisk -l | grep '^Disk /dev/sd[a-z]'
#output: empty

But if I attach-disk with the target name vdb instead:

virsh attach-disk myvm /var/lib/libvirt/images/vm-disk2 vdb --persistent --live --subdriver qcow2

then the same check after a reboot finds it:

sudo fdisk -l | grep '^Disk /dev/vd[a-z]'
#output:
# Disk /dev/vda: 42.2 GiB, 45311066112 bytes, 88498176 sectors
# Disk /dev/vdb: 500 GiB, XXXXXXXXX bytes, YYYYYYYYY sectors

Why can't I use sd* for the attached disk? How can I use sd* when attaching a disk to a KVM VM?

Tuyen Pham
  • why do you want to use an emulated scsi bus when virtio is faster and better? see https://wiki.libvirt.org/page/Virtio – cas Nov 10 '19 at 10:45
  • @cas: I need to convert the image to `raw` and use it as an AWS AMI for new EC2 instances; when adding the additional storage that comes with AWS, only `sd*` names are available. – Tuyen Pham Nov 10 '19 at 10:57
  • 1. if you want a raw image, why create a qcow2? 2. what has any of that got to do with whether the drive is vda or sda inside the VM? 3. why are you talking about ec2 instances when you're running kvm? either you haven't explained what it is you're actually trying to do or you're terribly confused. or both. – cas Nov 10 '19 at 13:07
  • I use qcow2 locally because it has some advantages over a raw file. I can't import qcow2 into AWS to use it as an AMI image, so I have to convert it to raw -- google it. After importing it into AWS and creating an EC2 instance from the uploaded raw image, AWS only allows choosing `sd*` for additional storage, no `vd*`. So if the qcow2 image has a line in `/etc/fstab` like `/dev/sdb1 /mnt ext4 rw 0 2`, it won't work; I need to mount the external storage in the image before uploading it to AWS. If I use `vd*` then AWS won't understand it. @cas – Tuyen Pham Nov 10 '19 at 15:10
  • 1. are those advantages in any way relevant to building an image on kvm for later use on aws? almost certainly not (BTW, I don't need to "google it", I know it - I used to do something similar years ago, building images on kvm on my workstation for use on openstack). 2. what AWS requires is completely irrelevant to KVM - just edit /etc/fstab immediately before uploading the raw file to ec2. AWS is a different environment to KVM, there are sure to be several trivial changes needed, this is just one of them. – cas Nov 11 '19 at 03:37
  • 3. or, you know, do something even more obvious and use UUID= or LABEL= in /etc/fstab because, as has been documented for well over a decade now, disk device names like /dev/sda or /dev/vda are **not** guaranteed to survive across reboots. They might, and often do, but you can not safely rely on that. – cas Nov 11 '19 at 03:39
  • I wanted to work locally and bring the image to AWS without modifying it, but I still need to fix the `UUID` anyway. Thanks for the help. @cas – Tuyen Pham Nov 11 '19 at 04:19
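
The UUID-based `/etc/fstab` approach suggested in the comments might look like this (the UUID value below is a placeholder; the real one comes from running `blkid` on the partition inside the VM):

```
# /etc/fstab: mount by filesystem UUID instead of a kernel device name,
# so the entry survives the vd* -> sd* rename between KVM and AWS
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /mnt  ext4  rw  0  2
```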

0 Answers