Configure Container-backed Remote CLIs and Clients¶
The StarlingX command lines can be accessed from remote computers running Linux, MacOS, and Windows.
About this task
This functionality is made available using a docker container with pre-installed CLIs and clients. The container’s image is pulled as required by the remote CLI/client configuration scripts.
Prerequisites
You must have a WAD or Local LDAP username and password to get the Kubernetes authentication token, a Keystone username and password to log in to Horizon, the OAM IP address, and, optionally, the Kubernetes CA certificate of the target StarlingX environment.
You must have Docker installed on the remote systems you connect from. For more information on installing Docker, see https://docs.docker.com/install/. For Windows remote clients, Docker is only supported on Windows 10.
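As a quick sanity check before proceeding, and assuming a standard Docker installation, you can verify that Docker is working on the remote system (the hello-world image below is used only as a convenient test):
$ docker --version
$ docker run --rm hello-world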
Note
You must be able to run docker commands using one of the following options:
Running the scripts using sudo
Adding the Linux user to the docker group
For more information, see, https://docs.docker.com/engine/install/linux-postinstall/
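For example, on most Linux distributions the second option looks like the following (a sketch only; see the Docker post-install documentation linked above for details):
# Add the current user to the docker group
$ sudo usermod -aG docker $USER
# Log out and back in (or run 'newgrp docker'), then confirm docker works without sudo
$ docker ps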
For Windows remote clients, you must run the following commands from a Cygwin terminal. See https://www.cygwin.com/ for more information about the Cygwin project.
For Windows remote clients, you must also have winpty installed. Download the latest release tarball for Cygwin from https://github.com/rprichard/winpty/releases. After downloading the tarball, extract it to any location and change the Windows <PATH> variable to include its bin folder from the extracted winpty folder.
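For example, assuming winpty was extracted to C:\winpty, you could make it available for the current Cygwin session as follows (a persistent change is normally made through the Windows environment variable settings instead):
$ export PATH="$PATH:/cygdrive/c/winpty/bin"
$ winpty --version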
The following procedure shows how to configure the Container-backed Remote CLIs and Clients for an admin user with the cluster-admin clusterrole. If using a non-admin user, such as one with privileges only within a private namespace, additional configuration is required to use Helm.
Procedure
On the active controller, log in through SSH or the local console as the sysadmin user and perform the following actions.
Configure Kubernetes permissions for users.
Source the platform environment
$ source /etc/platform/openrc
~(keystone_admin)]$
Create a user rolebinding file. You can customize the name of the user. Alternatively, to use group rolebinding and user group membership for authorization, see Configure Users, Groups, and Authorization
~(keystone_admin)]$ MYUSER="admin-user"
~(keystone_admin)]$ cat <<EOF > admin-user-rolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ${MYUSER}-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ${MYUSER}
EOF
Apply the rolebinding.
~(keystone_admin)]$ kubectl apply -f admin-user-rolebinding.yaml
Note the OAM IP address to be used later in the creation of the kubeconfig file.
~(keystone_admin)]$ system oam-show | grep oam_floating_ip | awk '{print $4}'
Use the command below in AIO-SX environments. AIO-SX uses <oam_ip> instead of <oam_floating_ip>.
~(keystone_admin)]$ system oam-show | grep oam_ip | awk '{print $4}'
Copy the public certificate of the Root CA that anchors the StarlingX REST API and Web Server SSL certificate to the remote workstation.
If the certificate in your system is anchored by the platform’s issuer (system-local-ca), you can do this using the following commands:
~(keystone_admin)]$ kubectl get secret system-local-ca -n cert-manager -o=jsonpath='{.data.ca\.crt}' | base64 --decode > /home/sysadmin/stx.ca.crt
~(keystone_admin)]$ scp /home/sysadmin/stx.ca.crt <remote_workstation_user>@<remote_workstation_IP>:~/stx.ca.crt
Optional: copy the Kubernetes CA certificate /etc/kubernetes/pki/ca.crt from the active controller to the remote workstation. This step is strongly recommended, but it is still possible to connect to the Kubernetes cluster without this certificate.
~(keystone_admin)]$ scp /etc/kubernetes/pki/ca.crt <remote_workstation_user>@<remote_workstation_IP>:~/k8s-ca.crt
On the remote workstation, perform the following actions.
Create a working directory that will be mounted by the container implementing the remote CLIs.
See the description of the configure_client.sh -w option below for more details.
$ mkdir -p $HOME/remote_cli_wd
Copy the remote client tarball file from the StarlingX build servers to the remote workstation, and extract its content.
The tarball is available from the StarlingX Public build servers.
You can extract the tarball’s contents anywhere on your client system.
$ cd $HOME
$ tar xvf stx-remote-clients-<version>.tgz
Download the user/tenant openrc file from the Horizon Web interface to the remote workstation.
Log in to Horizon as the user and tenant that you want to configure remote access for.
In this example, the ‘admin’ user in the ‘admin’ tenant.
Navigate to Project > API Access > Download Openstack RC file.
Select Openstack RC file.
The file admin-openrc.sh downloads. Copy this file to the location of the extracted tarball.
Note
For a Distributed Cloud system, navigate to Project > Central Cloud Regions > RegionOne > and download the Openstack RC file.
Add the following line to the bottom of the admin-openrc.sh file, specifying the filename of the public certificate from the Root CA that anchors the StarlingX REST API and Web Server SSL certificate on the remote workstation:
export OS_CACERT="stx.ca.crt"
Copy admin-openrc.sh to the remote workstation.
Create an empty admin-kubeconfig file on the remote workstation using the following command.
$ touch admin-kubeconfig
Configure remote CLI/client access.
This step will also generate a remote CLI/client RC file.
Change to the location of the extracted tarball.
$ cd $HOME/stx-remote-clients-<version>/
Run the configure_client.sh script.
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p docker.io/starlingx/stx-platformclients:stx.10.0-v1.5.17
If you specify a repository that requires authentication, you must first perform a docker login to that repository before using the remote CLIs. WRS AWS ECR credentials or a CA certificate is required.
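For example, if the image is hosted in a registry that requires authentication, a login along the following lines is needed first (the registry URL and credentials here are placeholders):
$ docker login <registry_url>
# Enter the registry username and password when prompted.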
The options for configure_client.sh are:
-t
The type of client configuration. The options are platform (for StarlingX CLI and clients) and openstack (for StarlingX OpenStack application CLI and clients).
The default value is platform.
-r
The user/tenant RC file to use for openstack CLI commands.
The default value is admin-openrc.sh.
-k
The Kubernetes configuration file to use for kubectl and helm CLI commands.
The default value is temp-kubeconfig.
-o
The remote CLI/client RC file generated by this script.
This RC file needs to be sourced in the shell, to set up the required environment variables and aliases, before running any remote CLI commands.
For the platform client setup, the default is remote_client_platform.sh. For the openstack application client setup, the default is remote_client_app.sh.
-w
The working directory that will be mounted by the container implementing the remote CLIs. When using the remote CLIs, any files passed as arguments to the remote CLI commands need to be in this directory in order for the container to access the files. The default value is the directory from which the configure_client.sh command was run.
-p
Override the container image for the platform CLI and clients.
By default, the platform CLIs and clients container image is pulled from docker.io/starlingx/stx-platformclients.
For example, to specify an explicit platform clients image and tag:
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p docker.io/starlingx/stx-platformclients:stx.10.0-v1.5.17
If you specify repositories that require authentication, you must first perform a docker login to that repository before using remote CLIs.
-a
Override the OpenStack application image.
By default, the OpenStack CLIs and clients container image is pulled from docker.io/starlingx/stx-openstackclients.
The configure_client.sh command will generate a remote_client_platform.sh RC file. This RC file needs to be sourced in the shell to set up the required environment variables and aliases before any remote CLI commands can be run.
Copy the file remote_client_platform.sh to $HOME/remote_cli_wd.
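For example, assuming the RC file is still in the directory where the tarball was extracted and configure_client.sh was run:
$ cp $HOME/stx-remote-clients-<version>/remote_client_platform.sh $HOME/remote_cli_wd/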
Update the contents in the admin-kubeconfig file using the kubectl command from the container. Use the OAM IP address and the Kubernetes CA certificate acquired in the steps above. If the OAM IP is IPv6, use the IP enclosed in brackets (example: “[fd00::a14:803]”).
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
$ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 k8s-ca.crt)
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
If you don’t have the Kubernetes CA certificate, execute the following commands instead.
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443 --insecure-skip-tls-verify
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
Postrequisites
After configuring the platform’s container-backed remote CLIs/clients, the remote platform CLIs can be used in any shell after sourcing the generated remote CLI/client RC file. This RC file sets up the required environment variables and aliases for the remote CLI commands.
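For example, assuming remote_client_platform.sh was copied to the working directory as described above, a new shell can be initialized and used as follows (the commands shown are only representative platform and Kubernetes CLI calls):
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ system host-list
$ kubectl get pods -A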
Note
Consider adding the command that sources the remote CLI/client RC file to your .login or shell rc file, so that your shells are automatically initialized with the environment variables and aliases for the remote CLI commands.
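For example, for a Bash login shell, and assuming the RC file was copied to $HOME/remote_cli_wd, a line like the following could be appended to ~/.bashrc:
$ echo 'source $HOME/remote_cli_wd/remote_client_platform.sh' >> ~/.bashrc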
See Using Container-backed Remote CLIs and Clients for details.
Related information