Rook Migration Playbooks

Overview

To perform the migrations, you can run a set of Ansible playbooks that automate the entire process. There are four playbooks: three for standalone systems and one for Distributed Cloud (DC) environments that migrates subclouds.

Standalone System Playbooks

The standalone system playbooks support redeploy, export-import, and in-service migrations, providing maximum control over migration parameters. For details about each method, see Rook Migration Methods.

Each playbook includes a pre-check stage to verify that all prerequisites are met before the migration begins. For more details, see Rook Migration Prerequisites.

Before executing a playbook:

  • Ensure that all applications, such as StarlingX OpenStack (and its VMs), are scaled down (see the example following this list).

  • Set the ANSIBLE_CONFIG environment variable in the same command that runs the playbook. This variable references the custom Ansible configuration file containing settings that override the default configuration provided in the ansible-playbooks repository.

Note

The subclouds playbook, when run on the System Controller, does not require this variable.
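
For example, a minimal pre-migration check is to list the applications and confirm that none are mid-apply or still hosting workloads (application names vary by system):

~(keystone_admin)$ system application-list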

If a playbook fails, you can simply re-run it after resolving any issues (such as reboots or swacts). Because the playbooks use cached data, they must always be executed from the same active controller. If a swact occurs, ensure the system is switched back to the original active controller before re-running the playbook; otherwise, the cache will be missing and the migration may fail.
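
For example, assuming controller-0 was the original active controller and a swact moved activity to controller-1, you can confirm the active controller (the capabilities field in the system host-show output indicates Controller-Active on the active host) and swact back:

# Check whether controller-1 is currently the active controller
~(keystone_admin)$ system host-show controller-1 | grep -i personality

# If it is, swact so that controller-0 becomes active again
~(keystone_admin)$ system host-swact controller-1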

If the playbook is manually aborted (for example, using Ctrl+C), the following flag file is created and must be removed before the playbook can be executed again:

/run/.rook_migration_in_progress
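
For example:

~(keystone_admin)$ sudo rm /run/.rook_migration_in_progress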

Once the migration is finished, Rook Ceph will initiate a brief reapply. Additional StarlingX applications may also undergo a short reapply as part of the post-migration process.

Redeploy Migration Playbook

~(keystone_admin)$ ANSIBLE_CONFIG="/usr/share/ansible/stx-ansible/playbooks/vars/storage-backend-migration/ansible.cfg" \
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-redeploy.yaml -v

Export-Import Migration Playbook

~(keystone_admin)$ ANSIBLE_CONFIG="/usr/share/ansible/stx-ansible/playbooks/vars/storage-backend-migration/ansible.cfg" \
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-export-import.yaml -v

Additional variables that can be passed in with -e for the export-import migration are backups_path and erase_rbd_snapshots. For more information on these variables, see Additional Variables for All Migration Types.
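
For example, an export-import run that stores data backups in a custom directory (the path below is illustrative) and allows existing RBD snapshots to be deleted:

~(keystone_admin)$ ANSIBLE_CONFIG="/usr/share/ansible/stx-ansible/playbooks/vars/storage-backend-migration/ansible.cfg" \
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-export-import.yaml -v \
-e "backups_path=/opt/platform-backup/migration-backups erase_rbd_snapshots=true"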

In-Service Migration Playbook

~(keystone_admin)$ ANSIBLE_CONFIG="/usr/share/ansible/stx-ansible/playbooks/vars/storage-backend-migration/ansible.cfg" \
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-in-service.yaml -v

The additional variable that can be passed in with -e for the in-service migration is backups_path. For more information, see Additional Variables for All Migration Types.
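
For example, to store data backups in a custom directory (illustrative path):

~(keystone_admin)$ ANSIBLE_CONFIG="/usr/share/ansible/stx-ansible/playbooks/vars/storage-backend-migration/ansible.cfg" \
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-in-service.yaml -v \
-e "backups_path=/opt/platform-backup/migration-backups"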

DC Playbook for Migrating Subclouds

To migrate subclouds from the System Controller, run the Subcloud migration playbook and specify the desired migration type. By default, the playbook processes up to five subclouds at a time, as defined in the default ansible.cfg from the ansible-playbooks repository.

Before running the playbook, create a custom-inventory.yaml file containing the required passwords, as shown in the example below.

---
# This is an example inventory file for use with the
# /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-subcloud.yaml
# playbook.
#
# To run the playbook, define an overrides file (as shown here)
# with the required variable settings and pass it as a parameter
# on the ansible command-line.
#
# Example ansible command:
# ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-subcloud.yaml -v \
#     -i my-inventory-file.yaml \
#     --extra-vars "target_list=subcloud1 migration_type=redeploy"

# Use target_list to specify individual subclouds, or a comma-separated
# list of subclouds such as 'subcloud1,subcloud2'. To target all online
# subclouds at once, use 'target_list=all_online_subclouds'.
#
#
all:
  children:
    # This will be applied to all online subclouds.
    # Use the example below in hosts to override specific settings for a subcloud, such as passwords.
    target_group:
      vars:
        # SSH password to connect to all subclouds
        ansible_ssh_user: sysadmin
        ansible_ssh_pass: <sysadmin-pwd>
        # Sudo password
        ansible_become_pass: <sysadmin-pwd>
#      Add a child group, as shown below, if you need individual
#      overrides for specific subcloud hosts.
#      Use the hosts section to add the list of hosts.
#      Use the vars section to override target_group variables,
#      such as the ssh password.
#      Note that you can also override multiple hosts at once or
#      have multiple child groups if necessary.
#      Example:
#      children:
#        different_password_group:
#          vars:
#            ansible_ssh_user: sysadmin
#            ansible_ssh_pass: <sysadmin-pwd>
#            ansible_become_pass: <sysadmin-pwd>
#          hosts:
#            subcloud1:
#            subcloud2:

To run the playbook, you must provide the migration_type (redeploy, export-import, or in-service) and the target_list as extra variables. The target_list may contain a comma-separated list of subclouds or all_online_subclouds to run the migration on all online subclouds.

~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-subcloud.yaml -v \
-i custom-inventory.yaml \
--extra-vars "target_list=subcloud1 migration_type=redeploy"

By default, the migration runs on five subclouds at a time, based on Ansible’s default forks value of 5. To migrate more subclouds at the same time, use the --forks=<number> parameter when invoking the playbook. If the playbook is interrupted manually (for example, with Ctrl+C) after migration has started on the subclouds, the /run/.rook_migration_in_progress file must be removed from each subcloud before the migration can be restarted.
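
For example, to run an in-service migration on all online subclouds with up to ten processed in parallel:

~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-subcloud.yaml -v \
-i custom-inventory.yaml \
--forks=10 \
--extra-vars "target_list=all_online_subclouds migration_type=in-service"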

New Deployment Manager File Pointing to Rook Ceph

The migration generates a new Deployment Manager file that points to Rook Ceph instead of Bare Metal Ceph. This file is required if the system needs to be reinstalled. By default, this file is saved in /opt/platform-backup/storage-backend-migration.

For subclouds, this new file is stored in the same directory on each subcloud.

An alternate path can be specified by using the storage_backend_migration_backup_dir extra variable. For more information, see Additional Variables for All Migration Types. The Deployment Manager file name is rook-ceph-migration-<system name>-dm-file.yaml.

TASK [New DM file reminder] ****************************************************
Friday 16 January 2026  00:45:30 +0000 (0:00:00.662)       0:29:50.484 ********
ok: [controller-0] =>
msg: A new DM file was generated after the migration and it is saved in /opt/platform-backup/storage-backend-migration/rook-ceph-migration-yow-wrcp-dc-016-sc1-dm-file.yaml

If a subcloud reinstall is required, you must use the new deployment manager file generated during the migration. Because the file resides on the subcloud, you must manually pull it from the subcloud to the System Controller before initiating the reinstall. Additionally, note that a backup taken on Bare Metal Ceph cannot be restored on Rook Ceph, and vice versa.
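
As a sketch (the subcloud OAM IP and system name below are placeholders), the file can be pulled to the System Controller with scp:

# Run from the System Controller; substitute your subcloud's OAM IP and system name
~(keystone_admin)$ scp sysadmin@<subcloud-oam-ip>:/opt/platform-backup/storage-backend-migration/rook-ceph-migration-<system name>-dm-file.yaml /home/sysadmin/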

Additional Variables for All Migration Types

The following extra variables can be passed with -e for any migration playbook (an example invocation follows the list):

  • cgts_vg_min_required_free_gib: Integer between 1 and 100 (default: 20 GiB). Used during the pre-check stage to ensure there is enough free space to create the required Rook Ceph monitors. This value typically does not need to be changed unless performing VM-based testing with very limited space.

  • minimum_platform_mem_workers: Integer between 1000 and 100000 (default: 7000 MiB). Specifies the minimum platform memory that will be configured when reinstalling storage nodes as workers.

  • storage_backend_migration_backup_dir: Directory path. Defaults to /opt/platform-backup/storage-backend-migration. Used to store the final generated deployment manager file in a custom directory, if needed.

  • backups_path: String (directory path). Specifies the directory used to store data backups. The default location is /mnt/migration, which resides in the custom logical volume migration-lv created inside cgts-vg. This parameter is only used when the kube-rbd (kube-system) pool is present during an in-service migration, or when performing an export-import migration.

  • erase_rbd_snapshots: Boolean. Applicable only to export-import migrations. When set to true, this parameter bypasses the RBD snapshot pre-check and deletes existing RBD snapshots (but not the PVC data created from those snapshots). The default value is false.
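
For example, a redeploy migration that saves the generated deployment manager file to a custom directory (the path below is illustrative):

~(keystone_admin)$ ANSIBLE_CONFIG="/usr/share/ansible/stx-ansible/playbooks/vars/storage-backend-migration/ansible.cfg" \
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/storage-backend-migration-redeploy.yaml -v \
-e "storage_backend_migration_backup_dir=/opt/platform-backup/custom-dm-files"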

Cache and Logs for Standalone and Distributed Cloud Systems

  • The Ansible cache is generated in a hidden directory located at:

    /home/sysadmin/.storage-backend-migration-cache

  • Log output for the standalone system playbooks is available at /home/sysadmin/storage-backend-migration-ansible.log. During a Distributed Cloud migration, you can also monitor logs in real time on each subcloud using this log file (see the example after this list).

  • After all subcloud procedures have completed, consolidated logs are available on the System Controller at /home/sysadmin/ansible.log.
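
For example, to follow migration progress in real time on a standalone system or on an individual subcloud:

~(keystone_admin)$ tail -f /home/sysadmin/storage-backend-migration-ansible.log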