Run Ansible Restore Playbook Remotely
In this method, you run the Ansible Restore playbook from a remote workstation and point it at controller-0.
Prerequisites
- It is recommended that you have Ansible version 2.7.5 or higher installed on your remote workstation. Copy the Ansible Backup/Restore playbooks from the /usr/share/ansible/stx-ansible/playbooks/ directory.
- If the system configuration is IPv6, ensure that your network has IPv6 connectivity before running the Ansible playbook.
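A quick preflight check on the remote workstation might look like the following sketch; the OAM address 128.224.141.74 is taken from the inventory example later in this procedure and is illustrative only.

    # Confirm the Ansible version installed on the remote workstation (2.7.5 or higher is recommended).
    ansible --version

    # Confirm that the workstation can reach the controller's floating OAM IP.
    # Replace 128.224.141.74 with your own OAM address; use "ping -6" if the OAM network is IPv6.
    ping -c 3 128.224.141.74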
Procedure
- Log in to the remote workstation. You can log in directly on the console or remotely using ssh.
- Provide an inventory file, either a customized one that is specified using the -i option, or the default one that is in the Ansible configuration directory (that is, /etc/ansible/hosts). You must specify the floating OAM IP of the controller host. For example, if the host name is stx_Cluster, the inventory file should have an entry called stx_Cluster:

      ---
      all:
        hosts:
          wc68:
            ansible_host: 128.222.100.02
          stx_Cluster:
            ansible_host: 128.224.141.74
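  As an optional sanity check (not part of the documented procedure), you can confirm that Ansible can reach the inventory entry before starting the restore; the inventory path $HOME/br_test/hosts matches the example used later in this procedure and is only an assumption.

      # Verify SSH reachability of the stx_Cluster entry with Ansible's ping module.
      # -u sysadmin logs in as the sysadmin user; --ask-pass prompts for its password.
      ansible stx_Cluster -i $HOME/br_test/hosts -u sysadmin --ask-pass -m ping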
- Run the Ansible Restore playbook:

      ~(keystone_admin)]$ ansible-playbook path-to-restore-platform-playbook-entry-file --limit host-name -i inventory-file -e optional-extra-vars

  where optional-extra-vars can be:

- Optional: You can select one of the following restore modes:

  - To keep Ceph data intact (false, the default option), use the following parameter:

        wipe_ceph_osds=false

  - To start with an empty Ceph cluster (true), where the Ceph cluster will need to be recreated, use the following parameter:

        wipe_ceph_osds=true

- To indicate that the backup data file is under the /opt/platform-backup directory on the local machine, use the following parameter:

      on_box_data=true

  If this parameter is set to false, the Ansible Restore playbook expects both the initial_backup_dir and backup_filename to be specified.
- The backup_filename is the platform backup tar file. It must be provided using the -e option on the command line, for example:

      -e backup_filename=localhost_platform_backup_2019_07_15_14_46_37.tgz
- The initial_backup_dir is the location on the Ansible control machine where the platform backup tar file is placed to restore the platform. It must be provided using the -e option on the command line.
- The admin_password, ansible_become_pass, and ansible_ssh_pass need to be set correctly using the -e option on the command line or in an Ansible secret file (see the sketch after the example below). ansible_ssh_pass is the password to the sysadmin user on controller-0.
- The ansible_remote_tmp should be set to a new directory (you do not need to create it ahead of time) under /home/sysadmin on controller-0 using the -e option on the command line.
  For example:

      ~(keystone_admin)]$ ansible-playbook /localdisk/designer/jenkins/tis-stx-dev/cgcs-root/stx/ansible-playbooks/playbookconfig/src/playbooks/restore_platform.yml --limit stx_Cluster -i $HOME/br_test/hosts -e "ansible_become_pass=St0rlingX* admin_password=St0rlingX* ansible_ssh_pass=St0rlingX* initial_backup_dir=$HOME/br_test backup_filename=stx_Cluster_system_backup_2019_08_08_15_25_36.tgz ansible_remote_tmp=/home/sysadmin/ansible-restore"

  Warning

  If ansible_remote_tmp is not set, /tmp will be used. /tmp can only hold 1 GB. An example of what happens when ansible_remote_tmp is not set:

      TASK [backup-restore/transfer-file : Transfer backup tarball to /scratch on controller-0] ***
      Wednesday 21 June 2023  13:59:28 +0000 (0:00:00.230)       0:00:51.283 ********
      fatal: [subcloud1]: FAILED! =>
        msg: |-
          failed to transfer file to /opt/platform-backup/subcloud1_platform_backup_2023_06_09_23_14_14.tgz
          /tmp/.ansible-sysadmin/tmp/ansible-tmp-1687355968.13-696694507261/source: scp: /tmp/.ansible-sysadmin/tmp/ansible-tmp-1687355968.13-696694507261/source: No space left on device

  Note

  If the backup contains patches, the Ansible Restore playbook will apply the patches and prompt you to reboot the system. You will then need to re-run the Ansible Restore playbook.
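  As a sketch of the secret-file approach mentioned above, the passwords and other extra-vars can be kept in a local vars file and passed with -e @file instead of being typed on the command line. The file name restore_vars.yml, the directory /home/user/br_test, and the values shown are assumptions for illustration only; encrypting the file with ansible-vault is optional.

      # Create a vars file on the remote workstation (hypothetical name and values).
      cat > $HOME/br_test/restore_vars.yml <<'EOF'
      admin_password: St0rlingX*
      ansible_become_pass: St0rlingX*
      ansible_ssh_pass: St0rlingX*
      wipe_ceph_osds: false
      initial_backup_dir: /home/user/br_test
      backup_filename: stx_Cluster_system_backup_2019_08_08_15_25_36.tgz
      ansible_remote_tmp: /home/sysadmin/ansible-restore
      EOF

      # Optionally encrypt the file, then pass it to the playbook with -e @file.
      ansible-vault encrypt $HOME/br_test/restore_vars.yml
      ansible-playbook path-to-restore-platform-playbook-entry-file --limit stx_Cluster -i $HOME/br_test/hosts -e @$HOME/br_test/restore_vars.yml --ask-vault-pass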
- After running the restore_platform.yml playbook, you can restore the local registry images.

  Note

  The backup file of the local registry may be large. Restore the backed-up file on the controller, where there is sufficient space.

      ~(keystone_admin)]$ ansible-playbook path-to-restore-user-images-playbook-entry-file --limit host-name -i inventory-file -e optional-extra-vars

  where optional-extra-vars can be:

- The backup_filename is the local registry backup tar file. It must be provided using the -e option on the command line, for example:

      -e backup_filename=localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz
- The initial_backup_dir is the location on the Ansible control machine where the backup tar file is located. It must be provided using the -e option on the command line.
- The ansible_become_pass and ansible_ssh_pass need to be set correctly using the -e option on the command line or in the Ansible secret file. ansible_ssh_pass is the password to the sysadmin user on controller-0.
- The backup_dir should be set to a directory on controller-0. The directory must have sufficient space for the local registry backup to be copied (see the disk-space check sketched after the example below). The backup_dir is set using the -e option on the command line.
- The ansible_remote_tmp should be set to a new directory on controller-0. Ansible will use this directory to copy files, and the directory must have sufficient space for the local registry backup to be copied. The ansible_remote_tmp is set using the -e option on the command line.
  For example, run the local registry restore playbook, where the /sufficient/space directory on the controller has sufficient space left for the archived file to be copied:

      ~(keystone_admin)]$ ansible-playbook /localdisk/designer/jenkins/tis-stx-dev/cgcs-root/stx/ansible-playbooks/playbookconfig/src/playbooks/restore_user_images.yml --limit stx_Cluster -i $HOME/br_test/hosts -e "ansible_become_pass=St0rlingX* ansible_ssh_pass=St0rlingX* initial_backup_dir=$HOME/br_test backup_filename=stx_Cluster_docker_local_registry_backup_2020_07_15_21_24_22.tgz ansible_remote_tmp=/sufficient/space backup_dir=/sufficient/space"
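  To choose suitable backup_dir and ansible_remote_tmp locations, you can check free space on controller-0 before running the playbook. The sketch below assumes the /sufficient/space directory from the example (it must already exist for df to report on it) and an SSH login as sysadmin at the example OAM address; adjust both to your environment.

      # On controller-0, confirm the filesystem that will hold backup_dir and ansible_remote_tmp has enough free space.
      ssh sysadmin@128.224.141.74 df -h /sufficient/space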
