VM Snapshot and Restore¶
VM snapshot and restore allows you to take a snapshot of a VM, preserving its existing configuration, and later restore the VM to that configuration point.
Snapshot a VM¶
Snapshots are supported for both online (running) and offline (stopped) VMs.
When snapshotting a running VM, the controller checks whether the QEMU guest agent is installed in the VM. If the agent is present, the controller freezes the VM filesystems before taking the snapshot and unfreezes them afterwards. For a more consistent online snapshot, it is recommended to have the guest agent installed; if it is not present, a best-effort snapshot is taken.
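If you want to confirm that the guest agent is connected before taking an online snapshot, you can query the VMI conditions. This is an optional sketch; it assumes a VM named pvc-test-vm, the name used later in this procedure. The command prints True when the agent is connected.
~(keystone_admin)$ kubectl get vmi pvc-test-vm -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'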
Procedure
Note
The snapshot CRDs and a running snapshot-controller pod must be present on the system in order to create the Volume Snapshot Class. The CRDs and the snapshot-controller are created by default during installation when the bootstrap playbook is run.
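To confirm these prerequisites are in place, you can list the snapshot CRDs and look for the snapshot-controller pod. This is a hedged, optional check; the namespace in which the snapshot-controller pod runs may differ on your system.
~(keystone_admin)$ kubectl get crd | grep snapshot.storage.k8s.io
~(keystone_admin)$ kubectl get pods -A | grep snapshot-controller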
Create VolumeSnapshotClass for cephfs and rbd:
Set the snapshotClass.create field to true for cephfs-provisioner.
~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps cephfs-provisioner kube-system --set snapshotClass.create=True
+----------------+--------------------+
| Property       | Value              |
+----------------+--------------------+
| name           | cephfs-provisioner |
| namespace      | kube-system        |
| user_overrides | snapshotClass:     |
|                |   create: true     |
|                |                    |
+----------------+--------------------+
Set the snapshotClass.create field to true for rbd-provisioner.
~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps rbd-provisioner kube-system --set snapshotClass.create=True
+----------------+-----------------+
| Property       | Value           |
+----------------+-----------------+
| name           | rbd-provisioner |
| namespace      | kube-system     |
| user_overrides | snapshotClass:  |
|                |   create: true  |
|                |                 |
+----------------+-----------------+
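If you want to double-check the stored user overrides before applying them, the system helm-override-show command can be used for each chart. This is an optional, hedged example; the exact output layout may vary between releases.
~(keystone_admin)$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system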
Run the application-apply command to apply the overrides.
~(keystone_admin)$ system application-apply platform-integ-apps
+---------------+--------------------------------------+
| Property      | Value                                |
+---------------+--------------------------------------+
| active        | True                                 |
| app_version   | 1.0-65                               |
| created_at    | 2024-01-08T18:15:07.178753+00:00     |
| manifest_file | fluxcd-manifests                     |
| manifest_name | platform-integ-apps-fluxcd-manifests |
| name          | platform-integ-apps                  |
| progress      | None                                 |
| status        | applying                             |
| updated_at    | 2024-01-08T18:39:10.251660+00:00     |
+---------------+--------------------------------------+
After a few seconds, confirm the creation of the Volume Snapshot Class.
~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
NAME              DRIVER                DELETIONPOLICY   AGE
cephfs-snapshot   cephfs.csi.ceph.com   Delete           40s
rbd-snapshot      rbd.csi.ceph.com      Delete           40s
Create VolumeSnapshotClass for cephfs provisioner.
cat <<EOF > cephfs-snapshotclass.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: cephfs-snapshot
driver: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Delete
EOF
kubectl apply -f cephfs-snapshotclass.yaml
Create VolumeSnapshotClass for rbd provisioner.
cat <<EOF > rbd-snapshotclass.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: rbd-snapshot
driver: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Delete
EOF
kubectl apply -f rbd-snapshotclass.yaml
After a few seconds, confirm the creation of the Volume Snapshot Class.
~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
NAME              DRIVER                          DELETIONPOLICY   AGE
cephfs-snapshot   rook-ceph.cephfs.csi.ceph.com   Delete           109m
rbd-snapshot      rook-ceph.rbd.csi.ceph.com      Delete           109m
Create a snapshot manifest for the running VM using the example YAML below:
cat <<EOF > cirros-snapshot.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: snap-cirros
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: pvc-test-vm
  failureDeadline: 3m
EOF
Note
Make sure to replace the name field under spec.source with the name of the VM to be snapshotted, as shown in the output of kubectl get vm.
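To list the VM names available for snapshotting, run the following command; the NAME column holds the value to use in the manifest (pvc-test-vm is the name assumed throughout this procedure, and your VM names will differ).
~(keystone_admin)$ kubectl get vm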
Apply the snapshot manifest and verify that the snapshot is created successfully.
kubectl apply -f cirros-snapshot.yaml

[sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineSnapshot
NAME          SOURCEKIND       SOURCENAME    PHASE       READYTOUSE   CREATIONTIME   ERROR
snap-cirros   VirtualMachine   pvc-test-vm   Succeeded   true         28m
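If you prefer a scripted check rather than reading the table, the snapshot phase and readiness can also be read directly from the resource status. This is an optional sketch assuming the snap-cirros name used above.
~(keystone_admin)$ kubectl get virtualmachinesnapshot snap-cirros -o jsonpath='{.status.phase} {.status.readyToUse}{"\n"}'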
Restore a VM¶
A VM is restored from a snapshot by applying a VirtualMachineRestore manifest. Example manifest to restore the snapshot:
cat <<EOF > cirros-restore.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
  name: restore-cirros
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: pvc-test-vm
  virtualMachineSnapshotName: snap-cirros
EOF
kubectl apply -f cirros-restore.yaml
Verify the snapshot restore:
[sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineRestore
NAME             TARGETKIND       TARGETNAME    COMPLETE   RESTORETIME   ERROR
restore-cirros   VirtualMachine   pvc-test-vm   true       34m
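After the restore reports COMPLETE, you can optionally confirm that the target VM is still defined and bring it back up. The commands below are a hedged sketch: virtctl is assumed to be installed, and the explicit start is only needed if the VM is not already running.
~(keystone_admin)$ kubectl get vm pvc-test-vm
~(keystone_admin)$ virtctl start pvc-test-vm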