Configure Container-backed Remote CLIs and Clients
The StarlingX command lines can be accessed from remote computers running Linux, macOS, and Windows.
About this task
This functionality is made available using a Docker container with pre-installed CLIs and clients. The container’s image is pulled as required by the remote CLI/client configuration scripts.
Prerequisites
You must have Docker installed on the remote systems you connect from. For more information on installing Docker, see https://docs.docker.com/install/. For Windows remote clients, Docker is only supported on Windows 10.
Note
You must be able to run docker commands using one of the following options:
Running the scripts using sudo
Adding the Linux user to the docker group
For more information, see https://docs.docker.com/engine/install/linux-postinstall/.
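For example, to add the current Linux user to the docker group (the standard post-install step described at the link above; log out and back in afterwards for the group membership to take effect):
$ sudo usermod -aG docker $USER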
For Windows remote clients, you must run the following commands from a Cygwin terminal. See https://www.cygwin.com/ for more information about the Cygwin project.
For Windows remote clients, you must also have winpty installed. Download the latest release tarball for Cygwin from https://github.com/rprichard/winpty/releases. After downloading the tarball, extract it to any location and add the bin folder of the extracted winpty directory to the Windows <PATH> variable.
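For example, a sketch assuming the release tarball was saved and extracted under $HOME in a Cygwin terminal; the <version> and <arch> placeholders depend on the release you downloaded:
$ tar -xf winpty-<version>-cygwin-<arch>.tar.gz -C $HOME
$ export PATH="$PATH:$HOME/winpty-<version>-cygwin-<arch>/bin"
The export makes winpty available for the current Cygwin session only; update the Windows <PATH> variable as described above to make the change permanent.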
The following procedure shows how to configure the Container-backed Remote CLIs and Clients for an admin user with the cluster-admin clusterrole. If you use a non-admin user, such as one with privileges only within a private namespace, additional configuration is required in order to use helm.
Procedure
On the Controller, configure a Kubernetes service account for users on the remote client.
You must configure a Kubernetes service account on the target system and generate a configuration file based on that service account.
Run the following commands while logged in as sysadmin on the local console of the controller.
Source the platform environment.
$ source /etc/platform/openrc
~(keystone_admin)]$
Set environment variables.
You can customize the service account name and the output configuration file by changing the <USER> and <OUTPUT_FILE> variables shown in the following examples.
~(keystone_admin)]$ USER="admin-user"
~(keystone_admin)]$ OUTPUT_FILE="admin-kubeconfig"
Create an account definition file.
~(keystone_admin)]$ cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${USER}
  namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: ${USER}-sa-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: ${USER}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: ${USER}
  namespace: kube-system
EOF
Apply the definition.
~(keystone_admin)]$ kubectl apply -f admin-login.yaml
Store the token value.
~(keystone_admin)]$ TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${USER} | awk '{print $1}') | grep "token:" | awk '{print $2}')
Store the OAM IP address.
~(keystone_admin)]$ OAM_IP=$(system oam-show | grep oam_floating_ip | awk '{print $4}')
AIO-SX uses <oam_ip> instead of <oam_floating_ip>. The following shell code ensures that <OAM_IP> is assigned the correct IP address.
~(keystone_admin)]$ if [ -z "$OAM_IP" ]; then
    OAM_IP=$(system oam-show | grep oam_ip | awk '{print $4}')
fi
IPv6 addresses must be enclosed in square brackets. The following shell code does this for you.
~(keystone_admin)]$ if [[ $OAM_IP =~ .*:.* ]]; then
    OAM_IP="[${OAM_IP}]"
fi
Create the output file and change its permissions so it is readable.
~(keystone_admin)]$ touch ${OUTPUT_FILE}
~(keystone_admin)]$ sudo chown sysadmin:sys_protected ${OUTPUT_FILE}
~(keystone_admin)]$ sudo chmod 644 ${OUTPUT_FILE}
Generate the admin-kubeconfig file.
~(keystone_admin)]$ sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-cluster wrcp-cluster --server=https://${OAM_IP}:6443 --insecure-skip-tls-verify
~(keystone_admin)]$ sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-credentials ${USER} --token=$TOKEN_DATA
~(keystone_admin)]$ sudo kubectl config --kubeconfig ${OUTPUT_FILE} set-context ${USER}@wrcp-cluster --cluster=wrcp-cluster --user ${USER} --namespace=default
~(keystone_admin)]$ sudo kubectl config --kubeconfig ${OUTPUT_FILE} use-context ${USER}@wrcp-cluster
Copy the remote client tarball file from the StarlingX build servers to the remote workstation, and extract its content.
The tarball is available from the StarlingX area on the StarlingX CENGN build servers.
You can extract the tarball’s contents anywhere on your client system.
$ cd $HOME
$ tar xvf stx-remote-clients-<version>.tgz
Download the user/tenant openrc file from the Horizon Web interface to the remote workstation.
Log in to Horizon as the user and tenant that you want to configure remote access for.
In this example, this is the ‘admin’ user in the ‘admin’ tenant.
Navigate to Project > API Access > Download Openstack RC file.
Select Openstack RC file.
The file admin-openrc.sh is downloaded.
Note
For a Distributed Cloud system, navigate to Project > Central Cloud Regions > RegionOne and download the Openstack RC file.
If HTTPS has been enabled for the StarlingX REST API endpoints on your StarlingX system, add the following line to the bottom of admin-openrc.sh:
OS_CACERT="stx.ca.crt"
Copy admin-openrc.sh to the remote workstation.
Copy the admin-kubeconfig file to the remote workstation.
You can copy the file to any location on the remote workstation. This example assumes that it is copied to the location of the extracted tarball.
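For example, a minimal sketch using scp; <oam-floating-ip> is a placeholder for the controller's OAM floating IP address:
$ scp sysadmin@<oam-floating-ip>:admin-kubeconfig $HOME/stx-remote-clients-<version>/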
On the remote workstation, configure remote CLI/client access.
This step will also generate a remote CLI/client RC file.
Change to the location of the extracted tarball.
$ cd $HOME/stx-remote-clients-<version>/
Create a working directory that will be mounted by the container implementing the remote CLIs.
See the description of the configure_client.sh -w option below for more details.
$ mkdir -p $HOME/remote_cli_wd
If HTTPS has been enabled, the CA certificate referenced in admin-openrc.sh must also be copied to the $HOME/remote_cli_wd directory.
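For example, assuming the certificate was saved as stx.ca.crt in the home directory:
$ cp $HOME/stx.ca.crt $HOME/remote_cli_wd/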
Run the configure_client.sh script.
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p https://hub.docker.com/layers/starlingx/stx-platformclients:stx.8.0-v1.5.9-wrs.3
If you specify repositories that require authentication, as shown above, you must first perform a docker login to that repository before using remote CLIs. WRS AWS ECR credentials or a CA certificate is required.
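For example, a sketch of logging in to an authenticated registry before using the remote CLIs; <registry-url> is a placeholder:
$ docker login <registry-url>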
The options for configure_client.sh are:
-t
The type of client configuration. The options are platform (for StarlingX CLI and clients) and openstack (for StarlingX OpenStack application CLI and clients).
The default value is platform.
-r
The user/tenant RC file to use for openstack CLI commands.
The default value is admin-openrc.sh.
-k
The Kubernetes configuration file to use for kubectl and helm CLI commands.
The default value is temp-kubeconfig.
-o
The remote CLI/client RC file generated by this script.
This RC file needs to be sourced in the shell, to setup required environment variables and aliases, before running any remote CLI commands.
For the platform client setup, the default is remote_client_platform.sh. For the openstack application client setup, the default is remote_client_app.sh.
-w
The working directory that will be mounted by the container implementing the remote CLIs. When using the remote CLIs, any files passed as arguments to the remote CLI commands need to be in this directory in order for the container to access the files. The default value is the directory from which the configure_client.sh command was run. (See the usage example after this list.)
-p
Override the container image for the platform CLI and clients.
By default, the platform CLIs and clients container image is pulled from docker.io/starlingx/stx-platformclients.
For example, to use the container images from the WRS AWS ECR:
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p https://hub.docker.com/layers/starlingx/stx-platformclients:stx.8.0-v1.5.9-wrs.3
If you specify repositories that require authentication, you must first perform a docker login to that repository before using remote CLIs.
-a
Override the OpenStack application image.
By default, the OpenStack CLIs and clients container image is pulled from docker.io/starlingx/stx-openstackclients.
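For example, a hypothetical sketch of the working-directory requirement described for the -w option above, run after the remote CLI RC file has been sourced (see Postrequisites below); test.yml is an assumed file name:
$ cp test.yml $HOME/remote_cli_wd/
$ cd $HOME/remote_cli_wd
$ kubectl apply -f test.yml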
The configure_client.sh command generates a remote_client_platform.sh RC file. This RC file must be sourced in the shell to set up the required environment variables and aliases before any remote CLI commands can be run.
Copy the file remote_client_platform.sh to $HOME/remote_cli_wd.
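For example:
$ cp remote_client_platform.sh $HOME/remote_cli_wd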
Postrequisites
After configuring the platform’s container-backed remote CLIs/clients, the remote platform CLIs can be used in any shell after sourcing the generated remote CLI/client RC file. This RC file sets up the required environment variables and aliases for the remote CLI commands.
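For example, assuming remote_client_platform.sh was copied to the working directory as described above (system host-list is shown only as a quick check that the aliases work):
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ system host-list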
Note
Consider adding this command to your .login or shell rc file, such that your shells will automatically be initialized with the environment variables and aliases for the remote CLI commands.
See Using Container-backed Remote CLIs and Clients for details.