Rook Migration Using CLI and API¶
This section describes how to initiate the storage backend migration using either the system CLI or the Sysinv API.
Note
The system CLI and the API support only the redeploy migration type.
After the migration is initiated, you can monitor progress through the CLI, API, and system alarms. For advanced control or special options, it is recommended to run the Storage Backend Migration Playbooks.
Migration Using the System CLI¶
Procedure
Retrieve the Ceph storage-backend UUID.
~(keystone_admin)$ system storage-backend-list
Example output:
+--------------------------------------+------------+---------+------------+------+----------+-----------------------------------+
| uuid                                 | name       | backend | state      | task | services | capabilities                      |
+--------------------------------------+------------+---------+------------+------+----------+-----------------------------------+
| 1dad3acc-d787-4d5c-984b-0854d6581de6 | ceph-store | ceph    | configured | None | None     | replication: 1 min_replication: 1 |
+--------------------------------------+------------+---------+------------+------+----------+-----------------------------------+
Run the following command to initiate the migration.
~(keystone_admin)$ system storage-backend-modify <uuid> migration_type=<migration-type>
Example
~(keystone_admin)$ system storage-backend-modify 1dad3acc-d787-4d5c-984b-0854d6581de6 migration_type=redeploy
Sample response:
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| backend              | ceph                                 |
| name                 | ceph-store                           |
| state                | configured                           |
| task                 | None                                 |
| services             | None                                 |
| capabilities         | replication: 1                       |
|                      | min_replication: 1                   |
| object_gateway       | False                                |
| ceph_total_space_gib | 0                                    |
| object_pool_gib      | None                                 |
| cinder_pool_gib      | None                                 |
| kube_pool_gib        | None                                 |
| glance_pool_gib      | None                                 |
| ephemeral_pool_gib   | None                                 |
| tier_name            | storage                              |
| tier_uuid            | a8b3cc97-898a-410b-b7a7-e9a185ebed9a |
| network              | mgmt                                 |
| created_at           | 2025-11-26T00:13:46.105354+00:00     |
| updated_at           | None                                 |
+----------------------+--------------------------------------+
Note
If an error occurs, rerun the same command. The migration resumes from the last completed step.
Migration Using the API¶
Procedure
Obtain an authentication token for the API.
token=$(curl -si http://<controller-IP>:5000/v3/auth/tokens \
  -X POST -H 'Content-Type: application/json' \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"domain": {"name": "Default"}, "name": "admin", "password": "<user password>"}}}, "scope": {"project": {"domain": {"name": "Default"}, "name": "admin"}}}}' \
  | awk '/X-Subject-Token/{print $2}' | tr -d '\r')
echo $token

Obtain the Ceph storage-backend UUID.
curl http://<controller-IP>:6385/v1/storage_backend \
  -X GET -H "Content-Type: application/json" \
  -H "X-Auth-Token: ${token}"

The ceph-store UUID appears in the response. Use it in the next request.
Trigger the migration request.
Send a PATCH request to /v1/storage_backend/<uuid> with the following body:
[{"path": "/capabilities", "value": "{\"migration_type\": \"redeploy\"}", "op": "replace"}]Example using curl:
curl http://<controller-IP>:6385/v1/storage_backend/1dad3acc-d787-4d5c-984b-0854d6581de6 \
  -X PATCH -H "Content-Type: application/json" \
  -H "X-Auth-Token: ${token}" \
  -d '[{"path": "/capabilities", "value": "{\"migration_type\": \"redeploy\"}", "op": "replace"}]'

Note
If an error occurs, resend the same request. The migration continues from the last completed step.
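Because the value field in the patch body is itself a JSON-encoded string, it is easy to mis-escape by hand. A minimal Python sketch (the helper name is hypothetical) that builds the body by serializing the inner dict on its own before serializing the outer list:

```python
import json

def build_migration_patch(migration_type="redeploy"):
    """Build the JSON-patch body for PATCH /v1/storage_backend/<uuid>.

    "value" must be a JSON-encoded *string*, so the inner dict is
    serialized first, then embedded in the outer list.
    """
    return json.dumps([{
        "path": "/capabilities",
        "value": json.dumps({"migration_type": migration_type}),
        "op": "replace",
    }])

body = build_migration_patch()
# Round-trip check: the inner value decodes back to a dict.
inner = json.loads(json.loads(body)[0]["value"])
print(inner)  # {'migration_type': 'redeploy'}
```

The resulting string can be passed directly as the `-d` payload of the curl command above.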
Track Migration Progress¶
When you list the storage backends, additional fields appear under the capabilities column:

- migration_type
- migration_status
- migration_step
These fields show the migration type, status, and the current step of the process.
Example:
~(keystone_admin)$ system storage-backend-list
Sample output:
+--------------------------------------+-----------------+-----------+----------------------+------+------------------+--------------------------------------------------------------------+
| uuid | name | backend | state | task | services | capabilities |
+--------------------------------------+-----------------+-----------+----------------------+------+------------------+--------------------------------------------------------------------+
| 1dad3acc-d787-4d5c-984b-0854d6581de6 | ceph-rook-store | ceph-rook | configuring-with-app | None | block,filesystem | replication: 1 migration_step: remove-baremetal migration_type: |
| | | | | | | redeploy min_replication: 1 deployment_model: controller |
| | | | | | | migration_status: in-progress has_long_running_operations: true |
| | | | | | | |
+--------------------------------------+-----------------+-----------+----------------------+------+------------------+--------------------------------------------------------------------+
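The migration fields can also be read programmatically. A minimal sketch, assuming capabilities arrives as a dict (as in the API JSON response); the helper name is hypothetical and the example entry mirrors the output above:

```python
def migration_progress(backend):
    """Extract the migration fields from one storage-backend entry.

    capabilities is assumed to be a dict, as in the JSON returned by
    GET /v1/storage_backend; missing fields come back as None.
    """
    caps = backend.get("capabilities", {})
    return {
        "type": caps.get("migration_type"),
        "status": caps.get("migration_status"),
        "step": caps.get("migration_step"),
    }

# Example entry mirroring the output above:
example = {
    "name": "ceph-rook-store",
    "backend": "ceph-rook",
    "capabilities": {
        "migration_type": "redeploy",
        "migration_status": "in-progress",
        "migration_step": "remove-baremetal",
    },
}
print(migration_progress(example))
# {'type': 'redeploy', 'status': 'in-progress', 'step': 'remove-baremetal'}
```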
Using the API, you can retrieve the same information by sending a GET request
to /v1/storage_backend. The migration fields are included in the response body.
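A simple way to wait for the migration to finish is to poll that endpoint until migration_status leaves in-progress. The sketch below keeps the fetch step as a callable so any HTTP client can be plugged in; the simulated fetcher and the "completed" terminal value are illustrative assumptions, not documented statuses.

```python
import time

def poll_migration(fetch_status, interval=10, timeout=3600):
    """Poll until migration_status leaves "in-progress".

    fetch_status is any callable returning the current migration_status
    string, e.g. parsed from a GET /v1/storage_backend response.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = fetch_status()
        if status != "in-progress":
            return status
        time.sleep(interval)
    raise TimeoutError("migration still in progress after timeout")

# Simulated fetcher for illustration only:
states = iter(["in-progress", "in-progress", "completed"])
print(poll_migration(lambda: next(states), interval=0))  # completed
```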
Additionally, a new alarm is raised to indicate that the migration is in progress.
~(keystone_admin)$ fm alarm-list --uuid
+--------------------------------------+-------+-------------------------------------------------------------------------------+--------------------------------------+----------+---------------------+
| UUID | Alarm | Reason Text | Entity ID | Severity | Time Stamp |
| | ID | | | | |
+--------------------------------------+-------+-------------------------------------------------------------------------------+--------------------------------------+----------+---------------------+
| 1389acfc-7740-4ba4-85db-4af577001214 | 250. | controller-0 Configuration is out-of-date. (applied: 6a79abd4-1cc2-4a2d-926a- | host=controller-0 | major | 2025-11-26T10:18:35 |
| | 001 | 1fdfb2d57e04 target: ea79abd4-1cc2-4a2d-926a-1fdfb2d57e04) | | | .466029 |
| | | | | | |
| bf9e3adb-b891-4175-8cd5-025a079ddc22 | 800. | Rook Migration is in progress | storage_backend=1dad3acc-d787-4d5c- | minor | 2025-11-26T10:18:06 |
| | 001 | | 984b-0854d6581de6 | | .545694 |
| | | | | | |
+--------------------------------------+-------+-------------------------------------------------------------------------------+--------------------------------------+----------+---------------------+
~(keystone_admin)$ fm alarm-show bf9e3adb-b891-4175-8cd5-025a079ddc22
+------------------------+------------------------------------------------------+
| Property | Value |
+------------------------+------------------------------------------------------+
| alarm_id | 800.001 |
| alarm_state | set |
| alarm_type | environmental |
| degrade_affecting | False |
| entity_instance_id | storage_backend=1dad3acc-d787-4d5c-984b-0854d6581de6 |
| entity_type_id | storage_backend |
| mgmt_affecting | True |
| probable_cause | congestion |
| proposed_repair_action | No action required. |
| reason_text | Rook Migration is in progress |
| service_affecting | True |
| severity | minor |
| suppression | False |
| suppression_status | unsuppressed |
| timestamp | 2025-11-26T10:18:06.545694 |
| uuid | bf9e3adb-b891-4175-8cd5-025a079ddc22 |
+------------------------+------------------------------------------------------+
In case of an error, the migration_status field under capabilities is updated to failed:
~(keystone_admin)$ system storage-backend-list
+--------------------------------------+-----------------+-----------+----------------------+------+------------------+--------------------------------------------------------------------+
| uuid | name | backend | state | task | services | capabilities |
+--------------------------------------+-----------------+-----------+----------------------+------+------------------+--------------------------------------------------------------------+
| 1dad3acc-d787-4d5c-984b-0854d6581de6 | ceph-rook-store | ceph-rook | configuring-with-app | None | block,filesystem | replication: 1 migration_step: remove-baremetal migration_type: |
| | | | | | | redeploy min_replication: 1 deployment_model: controller |
| | | | | | | migration_status: failed has_long_running_operations: true |
| | | | | | | |
+--------------------------------------+-----------------+-----------+----------------------+------+------------------+--------------------------------------------------------------------+
An alarm is raised to report a migration error:
~(keystone_admin)$ fm alarm-show 82d52574-8013-4bf9-ab85-e40d17c8b81c
+------------------------+------------------------------------------------------+
| Property | Value |
+------------------------+------------------------------------------------------+
| alarm_id | 800.001 |
| alarm_state | set |
| alarm_type | environmental |
| degrade_affecting | False |
| entity_instance_id | storage_backend=1dad3acc-d787-4d5c-984b-0854d6581de6 |
| entity_type_id | storage_backend |
| mgmt_affecting | True |
| probable_cause | receive-failure |
| proposed_repair_action | Check ansible.log, fix and re-run migration. |
| reason_text | Error during Rook Migration |
| service_affecting | True |
| severity | major |
| suppression | False |
| suppression_status | unsuppressed |
| timestamp | 2025-11-26T11:02:59.168296 |
| uuid | 82d52574-8013-4bf9-ab85-e40d17c8b81c |
+------------------------+------------------------------------------------------+
Logs and Ansible Cache¶
The Ansible cache is generated in a hidden directory:

/home/sysadmin/.storage-backend-migration-cache

Log output for the standalone system playbook:
/home/sysadmin/storage-backend-migration-ansible.log
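To locate the failing step in that log, you can scan it for Ansible failure markers. A minimal sketch, assuming the standard Ansible log format in which failed tasks are reported with a "fatal:" prefix; the sample lines and task name are illustrative, not taken from a real run:

```python
def failed_tasks(log_lines):
    """Return lines that report Ansible task failures.

    Assumes the standard Ansible log format, where a failed task is
    reported with a "fatal:" prefix; adjust the marker if your log
    format differs.
    """
    return [line.strip() for line in log_lines if "fatal:" in line]

# Illustrative sample (task name is hypothetical):
sample = [
    "TASK [Remove bare-metal Ceph] *****",
    'fatal: [controller-0]: FAILED! => {"msg": "example failure"}',
    "PLAY RECAP *****",
]
print(failed_tasks(sample))
```

In practice you would read the lines from /home/sysadmin/storage-backend-migration-ansible.log.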