Rook Migration Methods
Overview
StarlingX offers multiple migration options for moving from Bare Metal Ceph to Rook Ceph. Every migration method introduces an application outage: removing Bare Metal Ceph and deploying Rook Ceph cannot be done without a service interruption.
Note
Before the migration starts, PVCs must not be attached to any running pods: users must scale down their applications and wait until the migration is complete before scaling them back up (see the example below). For other prerequisites, see Rook Migration Prerequisites.
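A minimal sketch of scaling a workload down before the migration and back up afterwards; the namespace `my-namespace` and deployment `my-app` are placeholders for your own applications.

```
# Scale the application down so its PVCs are no longer attached to pods
kubectl -n my-namespace scale deployment my-app --replicas=0

# Verify that no pods in the namespace are still running
kubectl -n my-namespace get pods

# After the migration completes, scale the application back up
kubectl -n my-namespace scale deployment my-app --replicas=1
```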
AIO-SX, AIO-DX, AIO-DX with Worker nodes, and Standard Systems with Controller Storage support the following migration methods.
Note
Standard Systems with Dedicated Storage Nodes support only Cluster Redeploy because the storage resides on dedicated nodes. This method wipes all storage devices and rebuilds the Ceph environment using Rook Ceph. No user data is preserved.
In-Service Migration
Preserves user data while converting OSDs from Filestore to Bluestore and transitioning storage to Rook Ceph with controlled downtime. For AIO-SX systems:
At least two OSDs are required to proceed with the migration.
When operating with replica 1, AIO-SX systems must have sufficient free space to take an OSD out of service (mark it out/down); a capacity check sketch follows this list.
For AIO-DX and Standard Systems:
Since these configurations always operate with at least replica 2, an entire host can be wiped and migrated in a single step.
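A sketch of how the free-space and OSD-removal checks can be performed with standard Ceph commands before taking an OSD out; the OSD ID `0` is a placeholder.

```
# Check utilization to confirm the remaining OSDs can absorb the data
ceph df

# Ask Ceph whether the OSD can be stopped/destroyed without data loss
ceph osd ok-to-stop osd.0
ceph osd safe-to-destroy osd.0

# Take the OSD out of service so its data migrates elsewhere
ceph osd out 0
```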
Procedure
Migrate all disks from Filestore to Bluestore while the system is still running on Bare Metal Ceph (the sketch after this procedure shows one way to verify the conversion).
Remove Bare Metal Ceph and back up the cluster configuration data.
Deploy a Rook Ceph Pacific (v16) cluster and rebuild the keyrings and mon database (monstore) using the old Bare Metal Ceph data.
Rebuild the filesystem in the Pacific-based Rook Ceph cluster.
Remove the Pacific Rook Ceph deployment while retaining the cluster configuration.
Install the Rook Ceph application running Reef (v18) and rebuild the keyrings and mon database (monstore) using the Pacific data.
Rebuild the filesystem in the Reef-based Rook Ceph cluster.
Recreate the persistent volumes.
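Two checkpoints in this procedure can be illustrated with standard Ceph tooling; this is a sketch, with the OSD IDs, monstore path, and keyring path as placeholders.

```
# Step 1 checkpoint: confirm each OSD now reports Bluestore
ceph osd metadata 0 | grep osd_objectstore
ceph osd metadata 1 | grep osd_objectstore

# Steps 3 and 6: rebuild the mon database (monstore) from the preserved
# data using Ceph's ceph-monstore-tool; paths are illustrative
ceph-monstore-tool /tmp/monstore rebuild -- --keyring /tmp/admin.keyring
```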
Export/Import Migration
Preserves user data by exporting all Ceph data, redeploying Rook Ceph, then restoring the data. This method provides a clean storage redeployment while still maintaining data continuity.
Procedure
Export the RBD and CephFS data while running on Bare Metal Ceph, compressing the data as part of the process (see the sketch after this procedure).
Remove Bare Metal Ceph and save the configuration details used by the deployment (monitors, OSD layout, and so on).
Install the Rook Ceph application running Reef (v18), using the same configuration parameters previously used by Bare Metal Ceph.
Recreate the persistent volumes.
Import the compressed RBD and CephFS backups in the toolbox pod, decompressing the data during the import process.
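A sketch of the export and import steps, assuming a pool/image named `kube-rbd/pvc-1234`, a CephFS mount at `/mnt/cephfs`, and Rook's conventional toolbox deployment `rook-ceph-tools`; adjust the names to your deployment.

```
# On Bare Metal Ceph: stream an RBD image to stdout and compress it
rbd export kube-rbd/pvc-1234 - | gzip > /backup/pvc-1234.img.gz

# CephFS data can be archived from a mounted filesystem
tar -czf /backup/cephfs.tar.gz -C /mnt/cephfs .

# After Rook Ceph is deployed: import through the toolbox pod,
# decompressing on the fly
gunzip -c /backup/pvc-1234.img.gz | \
  kubectl -n rook-ceph exec -i deploy/rook-ceph-tools -- \
  rbd import - kube-rbd/pvc-1234
```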
Cluster Redeploy Migration
Performs a fast and clean transition to Rook Ceph without preserving user data. All storage devices are wiped, and the user must redeploy applications that interact with PVs/PVCs.
Procedure
Remove the existing Bare Metal Ceph environment. Since this migration does not preserve data, no configuration backup is necessary.
Deploy the Rook Ceph application running Reef (v18).
Recreate all PVCs that will be used by applications (see the example below).
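A minimal sketch of recreating a PVC; the storage class name `general` is an assumption here, so check the classes published by your Rook Ceph deployment with `kubectl get storageclass`.

```
# Recreate a PVC against a Rook Ceph storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: general   # assumption: verify the actual class name
EOF
```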
For Standard Systems with Dedicated Storage Nodes, after the migration, existing storage nodes are reinstalled as worker nodes. These new workers are configured with resource reservations that allocate as much capacity as possible to the platform.
Details about the resource reservation of these new workers:

Processor 0:
- Reserve two-thirds of the platform memory
- Reserve all cores except one for the platform

Processor 1 (if present):
- Reserve 1 GB of platform memory
- Reserve 1 core for the platform
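If these reservations need to be adjusted after the migration, StarlingX's host CPU and memory commands can be used. A sketch, assuming a worker named `worker-0` that is locked before modification; the counts and sizes shown are illustrative, not the values the migration applies.

```
# Lock the host before changing reservations
system host-lock worker-0

# Reserve platform cores on processor 0 (count is illustrative)
system host-cpu-modify -f platform -p0 5 worker-0

# Reserve platform memory in MiB on processor 0 (value is illustrative)
system host-memory-modify -m 20000 worker-0 0

system host-unlock worker-0
```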
Advantages and Disadvantages of Migration Methods

| Migration Method | Advantages | Disadvantages |
|---|---|---|
| In-Service Migration | Preserves user data in place (no export required); OSDs are converted from Filestore to Bluestore as part of the process | Multi-step procedure with two successive Rook Ceph deployments (Pacific v16, then Reef v18); on AIO-SX, requires at least two OSDs and, with replica 1, sufficient free space to take an OSD out of service |
| Export/Import | Preserves user data; provides a clean storage redeployment while maintaining data continuity | All RBD and CephFS data must be exported, stored compressed, and re-imported |
| Cluster Redeploy | Fast and clean transition; no configuration backup is necessary; the only method supported on Standard Systems with Dedicated Storage Nodes | No user data is preserved; all storage devices are wiped; applications that interact with PVs/PVCs must be redeployed |