Install Kubectl and Helm Clients Directly on a Host

As an alternative to using the container-backed Remote CLIs for kubectl and helm, you can install these commands directly on your remote host.

About this task

Kubectl and helm installed directly on the remote host provide the best CLI behaviour, especially for CLI commands that reference local files or require a shell.

The following procedure shows you how to configure the kubectl and helm clients directly on a remote host, for an admin user with the cluster-admin ClusterRole. If you are using a non-admin user with only Role privileges within a private namespace, additional configuration is required in order to use helm.

Prerequisites

You will need the following information from your StarlingX administrator:

  • the floating OAM IP address of the StarlingX cluster

  • login credentials; in this example, the token for a local Kubernetes ServiceAccount

  • your Kubernetes namespace
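The commands later in this procedure reference these values through shell variables. As a convenience, you can export them up front; the values shown here are placeholders for illustration only:

```shell
# Placeholder values for illustration only; substitute the values
# provided by your StarlingX administrator.
export CLUSTEROAMIP="10.10.10.2"          # floating OAM IP address
export MYTOKEN="eyJhbGciOiJSUzI1NiIs..."  # ServiceAccount token
export MYNAMESPACE="billing-dept-ns"      # your Kubernetes namespace
```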

Procedure

  1. Install the kubectl client directly on an Ubuntu host by performing the following actions on the remote Ubuntu system.

    1. Install the kubectl client CLI.

      % sudo apt-get update
      % sudo apt-get install -y apt-transport-https
      % curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      % echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      % sudo apt-get update
      % sudo apt-get install -y kubectl
      
    2. Set up the local configuration and context.

      Note

      In order for your remote host to trust the certificate used by the StarlingX Kubernetes API, you must ensure that the k8s_root_ca_cert provided by your StarlingX administrator is installed as a trusted CA certificate on your host. Follow the instructions for adding a trusted CA certificate for the operating system distribution of your particular host.
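      On Ubuntu, for example, the certificate can be added to the system trust store along the following lines. The file name k8s_root_ca_cert.pem is an assumption for illustration; use whatever file your administrator provided.

      ```shell
      # Copy the cluster's root CA into the system trust store.
      # update-ca-certificates only picks up files with a .crt extension.
      sudo cp k8s_root_ca_cert.pem /usr/local/share/ca-certificates/starlingx-k8s-root-ca.crt
      sudo update-ca-certificates
      ```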

      If your administrator does not provide a k8s_root_ca_cert at the time of installation, then specify --insecure-skip-tls-verify, as shown below.

      % kubectl config set-cluster mycluster --server=https://<$CLUSTEROAMIP>:6443 --insecure-skip-tls-verify
      % kubectl config set-credentials dave-user@mycluster --token=$MYTOKEN
      % kubectl config set-context dave-user@mycluster --cluster=mycluster --user=dave-user@mycluster --namespace=$MYNAMESPACE
      % kubectl config use-context dave-user@mycluster
      
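      Before testing access, you can confirm the resulting configuration; for example:

      ```shell
      # List configured contexts; the active context is marked with '*'.
      kubectl config get-contexts

      # Show the merged kubeconfig (sensitive fields such as the token
      # are redacted in the output).
      kubectl config view
      ```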
    3. Test remote kubectl access.

      % kubectl get pods -o wide
      NAME              READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
      nodeinfo-648f..   1/1     Running   0          62d   172.16.38.83    worker-4   <none>           <none>
      nodeinfo-648f..   1/1     Running   0          62d   172.16.97.207   worker-3   <none>           <none>
      nodeinfo-648f..   1/1     Running   0          62d   172.16.203.14   worker-5   <none>           <none>
      tiller-deploy..   1/1     Running   0          27d   172.16.97.219   worker-3   <none>           <none>
      
  2. Install the helm client directly on an Ubuntu host by performing the following actions on the remote Ubuntu system.

    1. Install the helm client.

      % wget https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
      % tar xvf helm-v2.13.1-linux-amd64.tar.gz
      % sudo cp linux-amd64/helm /usr/local/bin
      
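      As a quick sanity check, you can verify the installed client binary. With helm v2, the --client flag reports the client version only, so this does not require access to Tiller on the cluster:

      ```shell
      # Verify the helm client version without contacting Tiller.
      helm version --client

      # Helm v2 also requires a one-time client-side initialization,
      # which creates ~/.helm without touching the cluster.
      helm init --client-only
      ```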

      In order to use helm, additional configuration is required. For more information, see Configuring Remote Helm Client.