Create Cephfs Volume Snapshot Class
A Volume Snapshot Class for the Cephfs provisioner can be created via Helm overrides to support PVC snapshots.
About this task
A Volume Snapshot Class enables the creation of snapshots for PVCs, allowing for efficient backups and data restoration. This functionality ensures data protection, facilitating point-in-time recovery and minimizing the risk of data loss in Kubernetes clusters.
The procedure below demonstrates how to create a Volume Snapshot Class and Volume Snapshot for the Cephfs provisioner.
Note
The CRDs and a running snapshot-controller pod must be present in the system before the Volume Snapshot Class can be created. The CRDs and snapshot-controller are created by default during installation when running the bootstrap playbook.
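If you need to verify these prerequisites, the commands below list the snapshot CRDs and the snapshot-controller pod. The grep filters are illustrative, and the exact pod name and namespace may differ on your system.

~(keystone_admin)$ kubectl get crd | grep snapshot.storage.k8s.io
~(keystone_admin)$ kubectl get pods -A | grep snapshot-controller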
Procedure
List installed Helm chart overrides for the platform-integ-apps application.

~(keystone_admin)$ system helm-override-list platform-integ-apps
+--------------------+----------------------+
| chart name         | overrides namespaces |
+--------------------+----------------------+
| ceph-pools-audit   | ['kube-system']      |
| cephfs-provisioner | ['kube-system']      |
| rbd-provisioner    | ['kube-system']      |
+--------------------+----------------------+
Review existing overrides for the cephfs-provisioner chart.

~(keystone_admin)$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
Check if provisioner.snapshotter.enabled is set to true.

~(keystone_admin)$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
+--------------------+------------------------------------------------------+
| Property           | Value                                                |
+--------------------+------------------------------------------------------+
| attributes         | enabled: true                                        |
|                    |                                                      |
| combined_overrides | ...                                                  |
|                    | provisioner:                                         |
|                    |   replicaCount: 1                                    |
|                    |   snapshotter:                                       |
|                    |     enabled: true                                    |
+--------------------+------------------------------------------------------+
If the value is true, the csi-snapshotter container is created inside the Cephfs provisioner pod, and the CRDs and snapshot-controller matching the Kubernetes version are created.

If the value is false, and the CRDs and snapshot-controller present on the system are at a later version than the one recommended for the Kubernetes running on your system, you can update the value to true via helm-overrides and continue with the creation of the container as follows (a kubectl check of the installed snapshot API versions is shown after the update command below):

Update provisioner.snapshotter.enabled to true via helm-overrides.

~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps cephfs-provisioner kube-system --set provisioner.snapshotter.enabled=true
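If you need to check what is currently installed, the following commands show the snapshot API versions served by the installed CRDs and the snapshot-controller image in use. They assume the controller runs as a Deployment named snapshot-controller in the kube-system namespace, which may differ on your system.

~(keystone_admin)$ kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io -o jsonpath='{.spec.versions[*].name}'
~(keystone_admin)$ kubectl -n kube-system get deployment snapshot-controller -o jsonpath='{.spec.template.spec.containers[0].image}'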
Create the container.
~(keystone_admin)$ system application-apply platform-integ-apps
Important
To proceed with the creation of the snapshot class and volume snapshot, the csi-snapshotter container must be created.
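You can confirm that the container is present by listing the containers of the Cephfs provisioner pod. The pod name placeholder below is illustrative and will differ on your system.

~(keystone_admin)$ kubectl -n kube-system get pods | grep cephfs
~(keystone_admin)$ kubectl -n kube-system get pod <cephfs-provisioner-pod-name> -o jsonpath='{.spec.containers[*].name}'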
Update snapshotClass.create to true via Helm.

~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps cephfs-provisioner kube-system --set snapshotClass.create=True
Confirm that the new overrides have been applied to the chart.
~(keystone_admin)$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
+--------------------+------------------------------------------------------+
| Property           | Value                                                |
+--------------------+------------------------------------------------------+
| attributes         | enabled: true                                        |
|                    |                                                      |
| combined_overrides | classdefaults:                                       |
|                    |   adminId: admin                                     |
|                    |   adminSecretName: ceph-secret-admin                 |
|                    |   monitors:                                          |
|                    |   - 192.168.204.2:6789                               |
|                    | csiConfig:                                           |
|                    | - cephFS:                                            |
|                    |     subvolumeGroup: csi                              |
|                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
|                    |   monitors:                                          |
|                    |   - 192.168.204.2:6789                               |
|                    | provisioner:                                         |
|                    |   replicaCount: 1                                    |
|                    |   snapshotter:                                       |
|                    |     enabled: true                                    |
|                    | snapshotClass:                                       |
|                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
|                    |   create: true                                       |
|                    |   provisionerSecret: ceph-pool-kube-cephfs-data      |
|                    | storageClasses:                                      |
|                    | - additionalNamespaces:                              |
|                    |   - default                                          |
|                    |   - kube-public                                      |
|                    |   chunk_size: 64                                     |
|                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
|                    |   controllerExpandSecret: ceph-pool-kube-cephfs-data |
|                    |   crush_rule_name: storage_tier_ruleset              |
|                    |   data_pool_name: kube-cephfs-data                   |
|                    |   fs_name: kube-cephfs                               |
|                    |   metadata_pool_name: kube-cephfs-metadata           |
|                    |   name: cephfs                                       |
|                    |   nodeStageSecret: ceph-pool-kube-cephfs-data        |
|                    |   provisionerSecret: ceph-pool-kube-cephfs-data      |
|                    |   replication: 1                                     |
|                    |   userId: ceph-pool-kube-cephfs-data                 |
|                    |   userSecretName: ceph-pool-kube-cephfs-data         |
|                    |   volumeNamePrefix: pvc-volumes-                     |
|                    |                                                      |
| name               | cephfs-provisioner                                   |
| namespace          | kube-system                                          |
| system_overrides   | ...                                                  |
|                    |                                                      |
| user_overrides     | snapshotClass:                                       |
|                    |   create: true                                       |
|                    |                                                      |
+--------------------+------------------------------------------------------+
Apply the overrides.
Run the application-apply command.
~(keystone_admin)$ system application-apply platform-integ-apps
+---------------+--------------------------------------+
| Property      | Value                                |
+---------------+--------------------------------------+
| active        | True                                 |
| app_version   | 1.0-65                               |
| created_at    | 2024-01-08T18:15:07.178753+00:00     |
| manifest_file | fluxcd-manifests                     |
| manifest_name | platform-integ-apps-fluxcd-manifests |
| name          | platform-integ-apps                  |
| progress      | None                                 |
| status        | applying                             |
| updated_at    | 2024-01-08T18:39:10.251660+00:00     |
+---------------+--------------------------------------+
Monitor progress using the application-list command.
~(keystone_admin)$ system application-list
+---------------------+---------+--------------------------------------+------------------+---------+-----------+
| application         | version | manifest name                        | manifest file    | status  | progress  |
+---------------------+---------+--------------------------------------+------------------+---------+-----------+
| platform-integ-apps | 1.0-65  | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | applied | completed |
+---------------------+---------+--------------------------------------+------------------+---------+-----------+
Confirm the creation of the Volume Snapshot Class after a few seconds.
~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
NAME              DRIVER                DELETIONPOLICY   AGE
cephfs-snapshot   cephfs.csi.ceph.com   Delete           5s
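Optionally, you can inspect the parameters of the new class.

~(keystone_admin)$ kubectl describe volumesnapshotclasses.snapshot.storage.k8s.io cephfs-snapshot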
You can now create Cephfs PVC snapshots.
Consider the following Cephfs Volume Snapshot YAML example.
~(keystone_admin)$ cat << EOF > ~/cephfs-volume-snapshot.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: <cephfs-pvc-snapshot-name>
spec:
  volumeSnapshotClassName: cephfs-snapshot
  source:
    persistentVolumeClaimName: <cephfs-pvc-name>
EOF
Replace the values in the persistentVolumeClaimName and name fields.

Create the Volume Snapshot.
~(keystone_admin)$ kubectl create -f cephfs-volume-snapshot.yaml
Confirm that it was created successfully.
~(keystone_admin)$ kubectl get volumesnapshots.snapshot.storage.k8s.io
NAME                  READYTOUSE   SOURCEPVC        SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS     SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot   true         csi-cephfs-pvc                           1Gi           cephfs-snapshot   snapcontent-3953fe61-6c25-4536-9da5-efc05a216d27   3s             5s
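The snapshot can later be used for point-in-time recovery by creating a new PVC that references it as a data source. The example below is a minimal sketch: the PVC name csi-cephfs-pvc-restore is hypothetical, and the storage class, access mode, and requested size are assumptions that should match your environment.

~(keystone_admin)$ cat << EOF > ~/cephfs-pvc-restore.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc-restore
spec:
  storageClassName: cephfs
  dataSource:
    name: cephfs-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
~(keystone_admin)$ kubectl apply -f ~/cephfs-pvc-restore.yaml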