KVM / LibVirt snapshot handling
On a hypervisor where I had installed qemu-kvm, qemu-img, libvirt, virt-install, libvirt-client, and libguestfs-tools, I used virt-install to create a VM with a raw disk so I could take an external snapshot, which lets me copy the base IMG file with a backup script while the machine runs. For filesystem quiescence, I installed qemu-guest-agent on that VM.
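(As an aside, here's a quick host-side check that the agent is actually reachable; "vm1" is my domain name, so substitute yours.)

```shell
# Quick check that the qemu-guest-agent inside the guest answers the host.
# "vm1" is a placeholder domain name. If this fails, a snapshot with
# --quiesce will fail too, since libvirt uses the agent to freeze the
# guest's filesystems.
DOMAIN="vm1"
if virsh qemu-agent-command "$DOMAIN" '{"execute":"guest-ping"}' >/dev/null 2>&1; then
    echo "guest agent in $DOMAIN is responding"
else
    echo "guest agent in $DOMAIN is NOT responding" >&2
fi
```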
When I tried to do a sudo virsh snapshot-create-as vm1 snap1 --disk-only --quiesce, I got an "operation not supported: live disk snapshot not supported with this qemu binary" message. I found a thread on this forum indicating I needed to do yum install centos-release-qemu-ev followed by yum install qemu-kvm-ev, which upgraded the qemu-kvm package and a couple of others in place, as the thread said it would.
So when I ran my snapshot-create-as command again, my snapshot was dutifully created. I saw the snap file show up on the hypervisor’s disk as well as in the VM’s XML file. Yay!
Since the purpose of this is for backups, my next step was to delete the snapshot after copying the base file somewhere else. I tried this with sudo virsh snapshot-delete vm1 snap1, but I got a "failed to delete snapshot" message with "unsupported configuration: deletion of 1 external disk snapshots not supported yet". Thinking maybe I had screwed something up, I tried a sudo virsh snapshot-revert vm1 snap1 instead, but that gave me "unsupported configuration: revert to external snapshot not supported yet".
Does anyone know how to revert or delete external snapshots in CentOS 7? I’m open to using other commands within virsh (blockpull and blockcommit appear to be useful, but all the examples I’ve seen are collapsing snapshots in the middle of a chain, not taking a base and a snap and merging them) or other tools entirely (is there a way to do this with qemu-img that doesn’t involve shutting the machines down? If not, that sort of negates the purpose of the live snapshots for backups in the first place).
I’m also open to moving to qcow2 disk files and doing internal snapshots, but I worry about how that would affect my backups. (Or would the internal snapshot just be useless, ignored cruft at the end of the disk if I ever needed to restore a machine from that internally snapped disk file — something I could clean up some other way if needed?) I will if that’s what it takes, though. There are a variety of experiments I can try, but I suppose I’m looking to see what has worked for other people first.
Any suggestions would be appreciated.
Thanks,
Scott
Re: KVM / LibVirt snapshot handling
Thought I’d follow up on this in case anyone else runs into snapshot management issues before snapshot-delete and snapshot-revert are fully supported. Deletion is a pretty solid process; reversion works, but feels like a kludge. However, I’ll only use the latter for my patch process and, hopefully, only very rarely. I’d be interested in hearing about any improvements, but in the meantime I’ve tested these procedures and they work.
Here are the pages I found most useful during my research in constructing these processes. None of them was the answer on its own, but isn’t that just the way with figuring things out from the Internet.
https://www.redhat.com/archives/libvirt ... 00042.html
https://wiki.libvirt.org/page/Live-disk ... lockcommit
https://linux.die.net/man/1/virsh
https://access.redhat.com/documentation ... king-chain
https://access.redhat.com/documentation ... th_libvirt
https://wiki.libvirt.org/page/I_created ... vert_to_it
https://wiki.libvirt.org/page/Live-merg ... ctive-disk
So, I’m using raw images via virtio for performance, but since raw isn’t a copy-on-write format, the snapshot overlay will be qcow2 for as long as it’s needed. When I start the process and run dumpxml vm1 from virsh, the disk section looks like this:
domblklist vm1 shows this (hda is the virtual cdrom):
And snapshot-list vm1 shows no snapshots:
And the images folder on disk also contains no snapshots:
So I take my snapshot:
And when I do a dumpxml on vm1 I see the snapshot file with the original disk image as backup as the only change:
And domblklist vm1 shows the new snap in charge:
And a snapshot in the list:
When I look at the snapshot XML, I see the information about the snapshot followed by what looks like the domain XML file before the snapshot was taken:
[... followed by a copy of the original domain XML from before the snapshot, omitted here ...]
And now on the host disk I can see the snapshot file and the related xml:
I then installed httpd to make sure when I delete the snapshot I got all the new disk data.
So now let’s try to delete the snapshot, leaving me, once again, with a single raw disk file.
But, as mentioned in the OP, it doesn’t work. So after reviewing a bunch of stuff online, I decided this was the command I needed:
When I did a dumpxml on the domain, I saw the disk section had returned to its original state:
As did the domain block device list:
But there was still a snapshot, according to virsh:
And it’s still on disk:
As is the snapshot XML file:
It turns out that after a blockcommit, none of the other snapshot cleanup work is done, so I have to do it myself. First, in virsh, remove the snapshot metadata:
And the XML file is gone from the host disk:
But the snapshot file, itself, is not:
One last check to make sure the main disk image is in use, but the snapshot is not:
And I delete it.
And make sure it’s gone.
After that, I reconnect to the guest and when I try to install httpd again, the system already has it.
Great! That’s the process for after full image backups and patching where I want to keep the changes.
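For what it’s worth, those cleanup steps script up easily. Here’s a rough sketch (not a vetted tool): the domain name, snapshot name, and images path are placeholders, and with DRY_RUN=1 (the default here) it only prints what it would run.

```shell
#!/usr/bin/env bash
# Sketch: post-backup cleanup of a live external snapshot.
# Placeholder names/paths; DRY_RUN=1 prints commands instead of running them.
set -euo pipefail

DOMAIN="${DOMAIN:-vm1}"
SNAP="${SNAP:-ForDeletion.snap}"
IMAGES_DIR="${IMAGES_DIR:-/var/lib/libvirt/images}"
DRY_RUN="${DRY_RUN:-1}"

run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

# 1. Merge the qcow2 overlay back into the raw base and pivot the guest to it.
run virsh blockcommit "$DOMAIN" vda --active --wait --verbose --pivot
# 2. blockcommit leaves libvirt's snapshot bookkeeping behind; drop it.
run virsh snapshot-delete "$DOMAIN" --metadata "$SNAP"
# 3. Remove the orphaned overlay file (naming assumes libvirt's default of
#    <disk basename>.<snapshot name>).
run rm -f "$IMAGES_DIR/$DOMAIN.$SNAP"
```

Run it once with DRY_RUN=1, eyeball the three commands, then flip DRY_RUN to 0.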
Since I will also want to use this process to revert after bad changes (most likely patching gone wrong) I removed httpd from vm1 and then returned to the host to create another snapshot for testing reversion. Everything looks just like it did the last time I took a snapshot, except the snapshot name:
So, let’s install httpd again:
And we’ll try the snapshot-revert we know won’t work:
So now the question becomes how to roll back. I did a lot of looking into blockpull, trying to figure out how to make it do what I needed, but after a while of ramming my head into a wall, I decided the answer was to make it as if the snapshot never happened, by removing it from the chain and deleting it from disk.
So I shut down the guest (while I need deletions to happen on a live system, rollbacks will be rare enough that I am OK with doing this on a shut down system):
I removed the actual snapshot file and the xml file:
I went back into virsh and remove the snapshot metadata:
I then did a virsh edit vm1 and changed the disk section back to its original form. (I was bemused to find out the default editor is vi, but I managed to muddle through.) The 3 main changes needed were: return the driver type to ‘raw’, point the source file back at the original .img file, and remove the backingStore element and its child elements, leaving behind a self-closing <backingStore/> tag. Mine ended up looking like this when I was done:
So I fired up the guest again:
I was expecting some disk inconsistency errors, since this is a synced but still “blink out of existence” disk state that did not go through a graceful shutdown, but if there were errors, I wasn’t able to catch them during boot.
Now the moment of truth. Httpd should not be installed because that took place after the snapshot was taken:
And it was not. Reversion successful.
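The rollback steps above also collapse into a short script, except for the XML edit, which I’d keep manual. Again a sketch with placeholder names, printing instead of executing by default:

```shell
#!/usr/bin/env bash
# Sketch: offline rollback by discarding the external snapshot overlay.
# Placeholder names/paths; DRY_RUN=1 prints commands instead of running them.
set -euo pipefail

DOMAIN="${DOMAIN:-vm1}"
SNAP="${SNAP:-ForReversion.snap}"
IMAGES_DIR="${IMAGES_DIR:-/var/lib/libvirt/images}"
DRY_RUN="${DRY_RUN:-1}"

run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

run virsh shutdown "$DOMAIN"                                  # graceful stop
run rm -f "$IMAGES_DIR/$DOMAIN.$SNAP"                         # discard bad changes
run rm -f "/var/lib/libvirt/qemu/snapshot/$DOMAIN/$SNAP.xml"  # snapshot XML
run virsh snapshot-delete "$DOMAIN" --metadata "$SNAP"        # libvirt metadata
# Manual step before starting: virsh edit $DOMAIN -- driver type back to raw,
# source back to the original .img, backingStore reduced to <backingStore/>.
run virsh start "$DOMAIN" --console
```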
So, in summary, the processes are:
Take a live external snapshot with (this requires qemu-guest-agent):
As long as everything is Jake once you’ve finished your backup, patching, etc., delete the snapshot by:
If things went badly with whatever process you needed the snapshot to protect yourself against, revert to the original state by:
Watch for disk errors and confirm any changes that happened after the snapshot was taken are no longer there.
Looking back on what I’ve written, it appears I’ve gone overboard on the documentation, but I hope someone, someday, finds it useful. Once snapshot-delete and snapshot-revert start working, though, this whole thing will be obsolete.
Here’s the whole thing again with the actual commands and output, in the same order as described above. The starting state: the disk section from dumpxml vm1 in virsh:
Code: Select all
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='writethrough'/>
<source file='/var/lib/libvirt/images/vm1.img'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
Code: Select all
virsh # domblklist vm1
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/vm1.img
hda -
Code: Select all
virsh # snapshot-list vm1
Name Creation Time State
------------------------------------------------------------
Code: Select all
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/images
drwx--x--x. 3 root root 4096 Sep 1 20:39 .
drwxr-xr-x. 11 root root 4096 Aug 23 20:28 ..
-rw-------. 1 qemu qemu 53687091200 Sep 8 17:53 vm1.img
drwx------. 2 root root 16384 Jul 19 21:42 lost+found
Code: Select all
virsh # snapshot-create-as vm1 ForDeletion.snap --disk-only --quiesce --atomic
Domain snapshot ForDeletion.snap created
Code: Select all
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='writethrough'/>
<source file='/var/lib/libvirt/images/vm1.ForDeletion.snap'/>
<backingStore type='file' index='1'>
<format type='raw'/>
<source file='/var/lib/libvirt/images/vm1.img'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
Code: Select all
virsh # domblklist vm1
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/vm1.ForDeletion.snap
hda -
Code: Select all
virsh # snapshot-list vm1
Name Creation Time State
------------------------------------------------------------
ForDeletion.snap 2019-09-08 18:13:32 +0000 disk-snapshot
Code: Select all
virsh # snapshot-dumpxml vm1 ForDeletion.snap
<domainsnapshot>
<name>ForDeletion.snap</name>
<state>disk-snapshot</state>
<creationTime>1567966412</creationTime>
<memory snapshot='no'/>
<disks>
<disk name='vda' snapshot='external' type='file'>
<driver type='qcow2'/>
<source file='/var/lib/libvirt/images/vm1.ForDeletion.snap'/>
</disk>
<disk name='hda' snapshot='no'/>
</disks>
And now on the host disk I can see the snapshot file and the related xml:
Code: Select all
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/images
drwx--x--x. 3 root root 4096 Sep 8 18:13 .
drwxr-xr-x. 11 root root 4096 Aug 23 20:28 ..
-rw-------. 1 qemu qemu 1638400 Sep 8 18:14 vm1.ForDeletion.snap
-rw-------. 1 qemu qemu 53687091200 Sep 8 18:13 vm1.img
drwx------. 2 root root 16384 Jul 19 21:42 lost+found
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/qemu/snapshot/vm1
drwxr-xr-x. 2 root root 4096 Sep 8 18:13 .
drwxr-xr-x. 3 qemu qemu 4096 Sep 1 17:54 ..
-rw-------. 1 root root 4472 Sep 8 18:13 ForDeletion.snap.xml
On the guest, I then installed httpd so I’d have post-snapshot disk changes to verify later:
Code: Select all
[sawozny@vm1 ~]$ sudo yum install httpd
[sudo] password for sawozny:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 17 kB 00:00
* base: mirror.es.its.nyu.edu
* epel: mirror.math.princeton.edu
* extras: mirror.es.its.nyu.edu
* updates: mirror.es.its.nyu.edu
base | 3.6 kB 00:00
epel | 5.3 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
(1/2): epel/x86_64/updateinfo | 1.0 MB 00:00
(2/2): epel/x86_64/primary_db | 6.8 MB 00:01
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-89.el7.centos.1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
httpd x86_64 2.4.6-89.el7.centos.1 updates 2.7 M
Transaction Summary
================================================================================
Install 1 Package
Total download size: 2.7 M
Installed size: 9.4 M
Is this ok [y/d/N]: y
Downloading packages:
httpd-2.4.6-89.el7.centos.1.x86_64.rpm | 2.7 MB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : httpd-2.4.6-89.el7.centos.1.x86_64 1/1
Verifying : httpd-2.4.6-89.el7.centos.1.x86_64 1/1
Installed:
httpd.x86_64 0:2.4.6-89.el7.centos.1
Complete!
Now the snapshot-delete that, as mentioned in the OP, doesn’t work:
Code: Select all
virsh # snapshot-delete vm1 ForDeletion.snap
error: Failed to delete snapshot ForDeletion.snap
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
So instead, the command that actually merges the overlay back into the base:
Code: Select all
virsh # blockcommit vm1 vda --active --wait --verbose --pivot
Block commit: [100 %]
Successfully pivoted
The disk section from dumpxml after the pivot, back to its original state:
Code: Select all
virsh # dumpxml vm1
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='writethrough'/>
<source file='/var/lib/libvirt/images/vm1.img'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
Code: Select all
virsh # domblklist vm1
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/vm1.img
hda -
Code: Select all
virsh # snapshot-list vm1
Name Creation Time State
------------------------------------------------------------
ForDeletion.snap 2019-09-08 18:13:32 +0000 disk-snapshot
Code: Select all
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/images
[sudo] password for sawozny:
drwx--x--x. 3 root root 4096 Sep 8 18:13 .
drwxr-xr-x. 11 root root 4096 Aug 23 20:28 ..
-rw-------. 1 qemu qemu 152764416 Sep 8 19:10 vm1.ForDeletion.snap
-rw-------. 1 qemu qemu 53687091200 Sep 8 19:11 vm1.img
drwx------. 2 root root 16384 Jul 19 21:42 lost+found
Code: Select all
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/qemu/snapshot/vm1
drwxr-xr-x. 2 root root 4096 Sep 8 18:13 .
drwxr-xr-x. 3 qemu qemu 4096 Sep 1 17:54 ..
-rw-------. 1 root root 4472 Sep 8 18:13 ForDeletion.snap.xml
Removing the leftover snapshot metadata:
Code: Select all
virsh # snapshot-delete vm1 --metadata ForDeletion.snap
Domain snapshot ForDeletion.snap deleted
virsh # snapshot-list vm1
Name Creation Time State
------------------------------------------------------------
Code: Select all
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/qemu/snapshot/vm1
drwxr-xr-x. 2 root root 4096 Sep 8 19:19 .
drwxr-xr-x. 3 qemu qemu 4096 Sep 1 17:54 ..
Code: Select all
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/images
drwx--x--x. 3 root root 4096 Sep 8 18:13 .
drwxr-xr-x. 11 root root 4096 Aug 23 20:28 ..
-rw-------. 1 qemu qemu 152764416 Sep 8 19:10 vm1.ForDeletion.snap
-rw-------. 1 qemu qemu 53687091200 Sep 8 19:18 vm1.img
drwx------. 2 root root 16384 Jul 19 21:42 lost+found
Code: Select all
[sawozny@kvm1 etc]$ sudo lsof | grep vm1.img
qemu-kvm 9652 qemu 16u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
qemu-kvm 9652 qemu 17u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
qemu-kvm 9652 9679 qemu 16u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
qemu-kvm 9652 9679 qemu 17u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
IO 9652 9681 qemu 16u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
IO 9652 9681 qemu 17u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
CPU 9652 9682 qemu 16u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
CPU 9652 9682 qemu 17u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
CPU 9652 9684 qemu 16u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
CPU 9652 9684 qemu 17u REG 8,3 53687091200 12 /var/lib/libvirt/images/vm1.img
[sawozny@kvm1 etc]$ sudo lsof | grep vm1.ForDeletion.snap
Deleting the now-unreferenced overlay file:
Code: Select all
[sawozny@kvm1 etc]$ sudo rm /var/lib/libvirt/images/vm1.ForDeletion.snap
Code: Select all
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/images
drwx--x--x. 3 root root 4096 Sep 8 19:24 .
drwxr-xr-x. 11 root root 4096 Aug 23 20:28 ..
-rw-------. 1 qemu qemu 53687091200 Sep 8 19:18 vm1.img
drwx------. 2 root root 16384 Jul 19 21:42 lost+found
Code: Select all
[sawozny@vm1 ~]$ sudo yum install httpd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: repos-va.psychz.net
* epel: epel.mirror.constant.com
* extras: mirror.es.its.nyu.edu
* updates: mirrors.advancedhosters.com
Package httpd-2.4.6-89.el7.centos.1.x86_64 already installed and latest version
Nothing to do
Since I will also want to use this process to revert after bad changes (most likely patching gone wrong) I removed httpd from vm1 and then returned to the host to create another snapshot for testing reversion. Everything looks just like it did the last time I took a snapshot, except the snapshot name:
Code: Select all
virsh # snapshot-create-as vm1 ForReversion.snap --disk-only --quiesce --atomic
Domain snapshot ForReversion.snap created
virsh # dumpxml vm1
[keeping just the disk section]
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='writethrough'/>
<source file='/var/lib/libvirt/images/vm1.ForReversion.snap'/>
<backingStore type='file' index='1'>
<format type='raw'/>
<source file='/var/lib/libvirt/images/vm1.img'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
virsh # domblklist vm1
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/vm1.ForReversion.snap
hda -
virsh # snapshot-list vm1
Name Creation Time State
------------------------------------------------------------
ForReversion.snap 2019-09-08 19:25:56 +0000 disk-snapshot
virsh # snapshot-dumpxml vm1 ForReversion.snap
<domainsnapshot>
<name>ForReversion.snap</name>
<state>disk-snapshot</state>
<creationTime>1567970756</creationTime>
<memory snapshot='no'/>
<disks>
<disk name='vda' snapshot='external' type='file'>
<driver type='qcow2'/>
<source file='/var/lib/libvirt/images/vm1.ForReversion.snap'/>
</disk>
<disk name='hda' snapshot='no'/>
</disks>
[... followed by a copy of the original domain XML from before the snapshot, omitted here ...]
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/images
[sudo] password for sawozny:
drwx--x--x. 3 root root 4096 Sep 8 19:25 .
drwxr-xr-x. 11 root root 4096 Aug 23 20:28 ..
-rw-------. 1 qemu qemu 2752512 Sep 8 19:33 vm1.ForReversion.snap
-rw-------. 1 qemu qemu 53687091200 Sep 8 19:25 vm1.img
drwx------. 2 root root 16384 Jul 19 21:42 lost+found
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/qemu/snapshot/vm1
drwxr-xr-x. 2 root root 4096 Sep 8 19:25 .
drwxr-xr-x. 3 qemu qemu 4096 Sep 1 17:54 ..
-rw-------. 1 root root 4476 Sep 8 19:25 ForReversion.snap.xml
Installing httpd on the guest again, now that the snapshot is in place:
Code: Select all
[sawozny@vm1 ~]$ sudo yum install httpd
[sudo] password for sawozny:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 17 kB 00:00
* base: mirror.es.its.nyu.edu
* epel: mirror.math.princeton.edu
* extras: mirror.es.its.nyu.edu
* updates: mirror.es.its.nyu.edu
base | 3.6 kB 00:00
epel | 5.3 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
(1/2): epel/x86_64/updateinfo | 1.0 MB 00:00
(2/2): epel/x86_64/primary_db | 6.8 MB 00:01
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-89.el7.centos.1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
httpd x86_64 2.4.6-89.el7.centos.1 updates 2.7 M
Transaction Summary
================================================================================
Install 1 Package
Total download size: 2.7 M
Installed size: 9.4 M
Is this ok [y/d/N]: y
Downloading packages:
httpd-2.4.6-89.el7.centos.1.x86_64.rpm | 2.7 MB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : httpd-2.4.6-89.el7.centos.1.x86_64 1/1
Verifying : httpd-2.4.6-89.el7.centos.1.x86_64 1/1
Installed:
httpd.x86_64 0:2.4.6-89.el7.centos.1
Complete!
And the snapshot-revert we know won’t work:
Code: Select all
virsh # snapshot-revert vm1 ForReversion.snap
error: unsupported configuration: revert to external snapshot not supported yet
So I shut down the guest (while I need deletions to happen on a live system, rollbacks will be rare enough that I am OK with doing this on a shut down system):
Code: Select all
virsh # shutdown vm1
Domain vm1 is being shutdown
Removing the snapshot overlay file and its XML from the host:
Code: Select all
[sawozny@kvm1 etc]$ sudo rm /var/lib/libvirt/images/vm1.ForReversion.snap
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/images
drwx--x--x. 3 root root 4096 Sep 8 19:43 .
drwxr-xr-x. 11 root root 4096 Aug 23 20:28 ..
-rw-------. 1 qemu qemu 53687091200 Sep 8 19:25 vm1.img
drwx------. 2 root root 16384 Jul 19 21:42 lost+found
[sawozny@kvm1 etc]$ sudo rm /var/lib/libvirt/qemu/snapshot/vm1/ForReversion.snap.xml
[sawozny@kvm1 etc]$ sudo ls -la /var/lib/libvirt/qemu/snapshot/vm1
drwxr-xr-x. 2 root root 4096 Sep 8 19:44 .
drwxr-xr-x. 3 qemu qemu 4096 Sep 1 17:54 ..
Back in virsh, removing the snapshot metadata:
Code: Select all
virsh # snapshot-delete vm1 --metadata ForReversion.snap
Domain snapshot ForReversion.snap deleted
After the virsh edit, the disk section is back to its original form:
Code: Select all
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='writethrough'/>
<source file='/var/lib/libvirt/images/vm1.img'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
Then I fired the guest back up:
Code: Select all
virsh # start vm1 --console
Domain vm1 started
Now the moment of truth. Httpd should not be installed because that took place after the snapshot was taken:
Code: Select all
[sawozny@vm1 ~]$ sudo yum install httpd
[sudo] password for sawozny:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 17 kB 00:00
* base: mirror.es.its.nyu.edu
* epel: epel.mirror.constant.com
* extras: centos.mirror.constant.com
* updates: mirror.math.princeton.edu
base | 3.6 kB 00:00
epel | 5.3 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
(1/2): epel/x86_64/updateinfo | 1.0 MB 00:00
(2/2): epel/x86_64/primary_db | 6.8 MB 00:01
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-89.el7.centos.1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
httpd x86_64 2.4.6-89.el7.centos.1 updates 2.7 M
Transaction Summary
================================================================================
Install 1 Package
Total download size: 2.7 M
Installed size: 9.4 M
Is this ok [y/d/N]:
So, in summary, the processes are:
Take a live external snapshot with (this requires qemu-guest-agent):
Code: Select all
virsh# snapshot-create-as <GUEST_DOMAIN> <SNAP_NAME>.snap --disk-only --quiesce --atomic
As long as everything is Jake after your backup, patching, etc., delete the snapshot by:
Code: Select all
virsh # blockcommit <GUEST_DOMAIN> vda --active --wait --verbose --pivot
virsh # snapshot-delete <GUEST_DOMAIN> --metadata <SNAP_NAME>.snap
[user@kvmhost ~]$ sudo rm /var/lib/libvirt/images/<GUEST_DOMAIN>.<SNAP_NAME>.snap
If things went badly with whatever process you needed the snapshot to protect against, revert to the original state by:
Code: Select all
virsh # shutdown <GUEST_DOMAIN>
[user@kvmhost ~]$ sudo rm /var/lib/libvirt/images/<GUEST_DOMAIN>.<SNAP_NAME>.snap
[user@kvmhost ~]$ sudo rm /var/lib/libvirt/qemu/snapshot/<GUEST_DOMAIN>/<SNAP_NAME>.snap.xml
virsh # snapshot-delete <GUEST_DOMAIN> --metadata <SNAP_NAME>.snap
virsh # edit <GUEST_DOMAIN>
1) Change driver type from qcow2 back to raw
2) Change source file from snapshot filename to original disk filename
3) Remove the multi-child backingStore element and replace with a self closed <backingStore/>
virsh # start <GUEST_DOMAIN> --console
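Putting it together, the happy-path backup cycle (snapshot, copy base, merge, clean up) looks roughly like this as one script. Same caveats as before: the names and paths are placeholders (the /backup target is made up), and DRY_RUN=1 by default so it only prints the sequence.

```shell
#!/usr/bin/env bash
# Sketch: one full live-backup cycle using the process above.
# Placeholder names/paths (BACKUP_DIR is made up); DRY_RUN=1 only prints.
set -euo pipefail

DOMAIN="${DOMAIN:-vm1}"
DISK_TARGET="${DISK_TARGET:-vda}"
SNAP="backup-$(date +%Y%m%d%H%M%S).snap"
IMAGES_DIR="${IMAGES_DIR:-/var/lib/libvirt/images}"
BACKUP_DIR="${BACKUP_DIR:-/backup}"
DRY_RUN="${DRY_RUN:-1}"

run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

# 1. Quiesced external snapshot: guest writes now land in the qcow2 overlay.
run virsh snapshot-create-as "$DOMAIN" "$SNAP" --disk-only --quiesce --atomic
# 2. The raw base image is stable while the overlay absorbs writes; copy it.
run cp "$IMAGES_DIR/$DOMAIN.img" "$BACKUP_DIR/$DOMAIN.img"
# 3. Merge the overlay back into the base and pivot the guest onto it.
run virsh blockcommit "$DOMAIN" "$DISK_TARGET" --active --wait --verbose --pivot
# 4. Clean up the snapshot metadata and the orphaned overlay file.
run virsh snapshot-delete "$DOMAIN" --metadata "$SNAP"
run rm -f "$IMAGES_DIR/$DOMAIN.$SNAP"
```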
Re: KVM / LibVirt snapshot handling
Thanks for sharing.
Re: KVM / LibVirt snapshot handling
Nice to find a KVM thread
Do your VMs use UEFI or old-style BIOS? Last time I tried to snapshot one of my UEFI VMs, I got a message that it was not supported.
My performance VMs run directly on LVM logical volumes, so one day when I get a free moment I'll play with LVM snapshots.
Re: KVM / LibVirt snapshot handling
The VMs are BIOS/MBR, based on the absence of a /sys/firmware/efi folder and the lack of output from dmesg | grep EFI in the VMs. I compared those results to the host, which definitely IS UEFI, because I built it that way to support a pair of 3TB RAID 1 arrays, knowing that exceeds the BIOS/MBR limit.
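The check is simple enough to script; UEFI-booted systems expose the /sys/firmware/efi directory:

```shell
# Firmware check: /sys/firmware/efi only exists on UEFI-booted systems.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot"
else
    echo "legacy BIOS boot"
fi
```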
I briefly considered forcing the VMs to use UEFI when I was writing my build process, but I had no really compelling reason to do so, since the VMs won't exceed the size limitation that forced the host to UEFI. Over time I've also grown to think twice before changing a default that has wide implications without a particularly compelling reason.
I really wish snapshot-delete and snapshot-revert would just work, but after the research involved in forming this process, I get why it's not quite that easy.
Scott