VM Snapshot and Restore

VM snapshot allows you to take a snapshot of a VM, capturing its current configuration and data, and later restore the VM to that point.

Snapshot a VM

Snapshotting a VM is supported for online and offline VMs.

When snapshotting a running VM, the controller checks for the QEMU guest agent in the VM. If the agent is present, the controller freezes the VM filesystems before taking the snapshot and unfreezes them afterward. It is recommended to take online snapshots with the guest agent installed for a more consistent snapshot; if the agent is not present, a best-effort snapshot is taken.
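You can check whether the guest agent is connected before taking an online snapshot. A minimal sketch, assuming a running VMI named pvc-test-vm (the VM name used later in this procedure):

```shell
# Query the AgentConnected condition on the running VMI.
# "True" means the agent is available and filesystem freeze/thaw
# will be used during the snapshot; an empty result means a
# best-effort snapshot will be taken instead.
kubectl get vmi pvc-test-vm \
  -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'
```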

Procedure

Note

The snapshot CRDs and the snapshot-controller pod must be present in the system to create the VolumeSnapshotClass. The CRDs and snapshot-controller are created by default during installation when the bootstrap playbook is run.
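These prerequisites can be verified directly before proceeding. A sketch (the grep pattern assumes the controller pod name contains "snapshot-controller"):

```shell
# Confirm the external snapshot CRDs are installed.
kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io \
    volumesnapshots.snapshot.storage.k8s.io \
    volumesnapshotcontents.snapshot.storage.k8s.io

# Confirm the snapshot-controller pod is running.
kubectl get pods -A | grep snapshot-controller
```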

  1. Create a VolumeSnapshotClass for cephfs and rbd:

    Set the snapshotClass.create field to true for cephfs-provisioner.

    ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps cephfs-provisioner kube-system --set snapshotClass.create=True
    +----------------+--------------------+
    | Property       | Value              |
    +----------------+--------------------+
    | name           | cephfs-provisioner |
    | namespace      | kube-system        |
    | user_overrides | snapshotClass:     |
    |                |   create: true     |
    |                |                    |
    +----------------+--------------------+
    

    Set the snapshotClass.create field to true for rbd-provisioner.

    ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps rbd-provisioner kube-system --set snapshotClass.create=True
    +----------------+-----------------+
    | Property       | Value           |
    +----------------+-----------------+
    | name           | rbd-provisioner |
    | namespace      | kube-system     |
    | user_overrides | snapshotClass:  |
    |                |   create: true  |
    |                |                 |
    +----------------+-----------------+
    

    Run the application-apply command to apply the overrides.

    ~(keystone_admin)$ system application-apply platform-integ-apps
    +---------------+--------------------------------------+
    | Property      | Value                                |
    +---------------+--------------------------------------+
    | active        | True                                 |
    | app_version   | 1.0-65                               |
    | created_at    | 2024-01-08T18:15:07.178753+00:00     |
    | manifest_file | fluxcd-manifests                     |
    | manifest_name | platform-integ-apps-fluxcd-manifests |
    | name          | platform-integ-apps                  |
    | progress      | None                                 |
    | status        | applying                             |
    | updated_at    | 2024-01-08T18:39:10.251660+00:00     |
    +---------------+--------------------------------------+
    

    After a few seconds, confirm the creation of the VolumeSnapshotClass resources.

    ~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
    NAME              DRIVER                DELETIONPOLICY   AGE
    cephfs-snapshot   cephfs.csi.ceph.com   Delete           40s
    rbd-snapshot      rbd.csi.ceph.com      Delete           40s
    
  2. Create a snapshot manifest for the running VM using the example YAML below:

    cat <<EOF > cirros-snapshot.yaml
    apiVersion: snapshot.kubevirt.io/v1alpha1
    kind: VirtualMachineSnapshot
    metadata:
      name: snap-cirros
    spec:
      source:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: pvc-test-vm
      failureDeadline: 3m
    EOF
    

    Note

    Make sure to replace the spec.source.name field (pvc-test-vm in this example) with the name of the VM to snapshot, as shown in the NAME column of the kubectl get vm output.

  3. Apply the snapshot manifest and verify that the snapshot is successfully created.

    kubectl apply -f cirros-snapshot.yaml
    [sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineSnapshot
    NAME          SOURCEKIND       SOURCENAME    PHASE       READYTOUSE   CREATIONTIME   ERROR
    snap-cirros   VirtualMachine   pvc-test-vm   Succeeded   true         28m
    

Restore a VM Snapshot

Use the example manifest below to restore the snapshot:

cat <<EOF > cirros-restore.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
  name: restore-cirros
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: pvc-test-vm
  virtualMachineSnapshotName: snap-cirros
EOF
kubectl apply -f cirros-restore.yaml

Verify the snapshot restore:

[sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineRestore
NAME               TARGETKIND       TARGETNAME    COMPLETE   RESTORETIME   ERROR
restore-cirros     VirtualMachine   pvc-test-vm   true       34m
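A restore is typically performed against a stopped VM. The full restore cycle can be sketched as follows, assuming virtctl is installed and using the example names from this procedure (vmrestore is the short name KubeVirt registers for VirtualMachineRestore):

```shell
# Stop the VM so the restore can proceed.
virtctl stop pvc-test-vm

# Apply the restore manifest and wait for it to complete.
kubectl apply -f cirros-restore.yaml
kubectl wait vmrestore restore-cirros --for condition=Ready --timeout=5m

# Start the VM again from the restored state.
virtctl start pvc-test-vm
```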