Run Ansible Restore Playbook Remotely

In this method, you run the Ansible Restore playbook from a remote workstation, pointing it at controller-0.

Prerequisites

  • It is recommended that you have Ansible version 2.7.5 or higher installed on your remote workstation. Copy the Ansible Backup/Restore playbooks from the /usr/share/ansible/stx-ansible/playbooks/ directory.

  • If the system configuration is IPv6, ensure that your network has IPv6 connectivity before running the Ansible playbook.
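
For example, you can verify the installed Ansible version and copy the playbooks from the controller to your workstation as follows (an illustrative sketch; the destination path and the OAM floating IP placeholder are examples only):

  ansible --version
  scp -r sysadmin@<oam-floating-ip>:/usr/share/ansible/stx-ansible/playbooks $HOME/stx-playbooks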

Procedure

  1. Log in to the remote workstation.

    You can log in directly on the console or remotely using ssh.

  2. Provide an inventory file, either a customized one specified with the -i option, or the default one in the Ansible configuration directory (that is, /etc/ansible/hosts). You must specify the floating OAM IP address of the controller host. For example, if the host name is stx_Cluster, the inventory file should have an entry called stx_Cluster:

    ---
    all:
      hosts:
        wc68:
          ansible_host: 128.222.100.2
        stx_Cluster:
          ansible_host: 128.224.141.74
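
    You can optionally confirm that Ansible parses the inventory as expected (the file path here is illustrative):

    ansible-inventory -i $HOME/br_test/hosts --list
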
  3. Run the Ansible Restore playbook:

    ~(keystone_admin)]$ ansible-playbook path-to-restore-platform-playbook-entry-file --limit host-name -i inventory-file -e optional-extra-vars
    

    where optional-extra-vars can be:

    • To keep the Ceph cluster data intact (false, the default), pass the following parameter in the extra arguments to the Ansible Restore playbook command:

      wipe_ceph_osds=false
      

      To wipe the Ceph cluster entirely (true), use the following parameter when the Ceph cluster must be recreated, when the Ceph partition was previously wiped (for example, during a fresh install between backup and restore), or during a reinstall:

      wipe_ceph_osds=true
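
      In either case, the parameter is passed with the other extra arguments, for example (an illustrative fragment):

      -e "wipe_ceph_osds=true"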
      
    • To indicate where the backup file is located, which can be a convenient place defined by initial_backup_dir (such as the home folder for sysadmin, /tmp, or even a mounted USB device), use the following parameter:

      on_box_data=true/false
      

      If this parameter is set to true, the Ansible Restore playbook looks for the backup file on the target server. The initial_backup_dir parameter can be omitted from the command line; in that case, the backup file must be in the /opt/platform-backup directory.

      If this parameter is set to false, the Ansible Restore playbook looks for the backup file on the Ansible controller. In this case, both initial_backup_dir and backup_filename must be specified in the command.

    • backup_filename is the platform backup tar file. It must be provided using the -e option on the command line, for example:

      -e backup_filename=localhost_platform_backup_2019_07_15_14_46_37.tgz
      
    • The initial_backup_dir is the location where the platform backup tar file is placed to restore the platform. It must be provided using the -e option on the command line.

      Note

      When on_box_data=false, initial_backup_dir must be defined.
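
      Putting the preceding parameters together, the on-box and off-box invocations differ only in these arguments (an illustrative fragment reusing the example file name above):

      # Backup file already on controller-0, under /opt/platform-backup:
      -e "on_box_data=true backup_filename=localhost_platform_backup_2019_07_15_14_46_37.tgz"

      # Backup file on the Ansible controller:
      -e "on_box_data=false initial_backup_dir=$HOME/br_test backup_filename=localhost_platform_backup_2019_07_15_14_46_37.tgz"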

    • The admin_password, ansible_become_pass, and ansible_ssh_pass need to be set correctly using the -e option on the command line or in the Ansible secret file. ansible_ssh_pass is the password for the sysadmin user on controller-0.

    • If backup encryption was enabled during the platform backup, the options backup_encryption_enabled=true and backup_encryption_passphrase="<encryption_password>" are also required when restoring the platform. Consider storing the backup_encryption_passphrase in the Ansible secret file.
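
      For example, the passwords and passphrase can be kept off the command line in an encrypted Ansible vault file (a sketch; the file name secrets.yml is illustrative):

      ansible-vault create secrets.yml

      # Contents of secrets.yml:
      admin_password: St0rlingX*
      ansible_become_pass: St0rlingX*
      ansible_ssh_pass: St0rlingX*
      backup_encryption_passphrase: <encryption_password>

      The vault file is then passed to ansible-playbook with -e @secrets.yml, and the vault password is supplied with --ask-vault-pass.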

    • The ansible_remote_tmp should be set to a new directory (it does not need to be created ahead of time) under /home/sysadmin on controller-0, using the -e option on the command line.

      For example:

      ~(keystone_admin)]$ ansible-playbook /localdisk/designer/jenkins/tis-stx-dev/cgcs-root/stx/ansible-playbooks/playbookconfig/src/playbooks/restore_platform.yml --limit stx_Cluster -i $HOME/br_test/hosts -e "ansible_become_pass=St0rlingX* admin_password=St0rlingX* ansible_ssh_pass=St0rlingX* initial_backup_dir=$HOME/br_test backup_filename=stx_Cluster_system_backup_2019_08_08_15_25_36.tgz ansible_remote_tmp=/home/sysadmin/ansible-restore"

      Warning

      If ansible_remote_tmp is not set, /tmp will be used. /tmp can only hold 1GB.

      Below is sample output without ansible_remote_tmp:

      TASK [backup-restore/transfer-file : Transfer backup tarball to /scratch on controller-0] ***
      Wednesday 21 June 2023  13:59:28 +0000 (0:00:00.230)       0:00:51.283 ********
      fatal: [subcloud1]: FAILED! =>
       msg: |-
        failed to transfer file to /opt/platform-backup/subcloud1_platform_backup_2023_06_09_23_14_14.tgz /tmp/.ansible-sysadmin/tmp/ansible-tmp-1687355968.13-696694507261/source:
      
        scp: /tmp/.ansible-sysadmin/tmp/ansible-tmp-1687355968.13-696694507261/source: No space left on device
      
    • ssl_ca_certificate_file defines a single certificate or a bundle that contains all the ssl_ca certificates that will be installed during the restore.

      The general form is:

      ssl_ca_certificate_file=<complete path>/<ssl_ca certificates file>

      For example:

      -e "ssl_ca_certificate_file=/home/sysadmin/new_ca-cert.pem"
      

      Note

      In the legacy restore, when this option is used, it replaces all the ssl_ca certificates in the backup with the one specified in ssl_ca_certificate_file.

      In the optimized restore, when this option is used, it adds the certificates from ssl_ca_certificate_file to the existing ssl_ca certificates in the backup.
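
      If several CA certificates must be installed, they can first be concatenated into a single bundle file (an illustrative sketch; the input file names are hypothetical):

      cat first_ca-cert.pem second_ca-cert.pem > /home/sysadmin/ca-bundle.pem

      -e "ssl_ca_certificate_file=/home/sysadmin/ca-bundle.pem"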

    Note

    If the backup contains patches, the Ansible Restore playbook applies the patches and prompts you to reboot the system. You must then re-run the Ansible Restore playbook.

    Note

    After the restore is completed, it is not possible to restart (or re-run) the restore playbook.

  4. After running the restore_platform.yml playbook, you can restore the local registry images.

    Note

    The backup file of the local registry may be large. Restore the backed-up file on the controller, where there is sufficient space.

    ~(keystone_admin)]$ ansible-playbook path-to-restore-user-images-playbook-entry-file --limit host-name -i inventory-file -e optional-extra-vars
    

    where optional-extra-vars can be:

    • The backup_filename is the local registry backup tar file. It must be provided using the -e option on the command line, for example:

      -e backup_filename=localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz
      
    • The initial_backup_dir is the location on the Ansible control machine where the local registry backup tar file is located. It must be provided using the -e option on the command line.

    • The ansible_become_pass and ansible_ssh_pass need to be set correctly using the -e option on the command line or in the Ansible secret file. ansible_ssh_pass is the password for the sysadmin user on controller-0.

    • The backup_dir should be set to a directory on controller-0. The directory must have sufficient space for the local registry backup to be copied. The backup_dir is set using the -e option on the command line.

    • The ansible_remote_tmp should be set to a new directory on controller-0. Ansible will use this directory to copy files, and the directory must have sufficient space for the local registry backup to be copied. The ansible_remote_tmp is set using the -e option on the command line.

    For example, run the local registry restore playbook as follows, where the /sufficient/space directory on the controller has enough space left for the archived file to be copied:

    ~(keystone_admin)]$ ansible-playbook /localdisk/designer/jenkins/tis-stx-dev/cgcs-root/stx/ansible-playbooks/playbookconfig/src/playbooks/restore_user_images.yml --limit stx_Cluster -i $HOME/br_test/hosts -e "ansible_become_pass=St0rlingX* ansible_ssh_pass=St0rlingX* initial_backup_dir=$HOME/br_test backup_filename=stx_Cluster_docker_local_registry_backup_2020_07_15_21_24_22.tgz ansible_remote_tmp=/sufficient/space backup_dir=/sufficient/space"