Metrics Server

This is a pre-release feature and may not function as described in StarlingX 5 documentation.

Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.

It collects resource metrics from Kubelets and exposes them through the Kubernetes apiserver via the Metrics API, where they can be consumed by Kubernetes Horizontal Pod Autoscaler definitions. The Metrics API can also be used directly by end users' containerized applications, for example, to enable application-specific load management mechanisms.

Metrics collected by Metrics Server can be viewed with kubectl top, which is useful for debugging autoscaling pipelines.
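
For example, current node and pod resource usage can be inspected with the standard kubectl top subcommands:

    ~(keystone_admin)$ kubectl top nodes
    ~(keystone_admin)$ kubectl top pods --all-namespaces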

For more information see: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands/.

Metrics API use cases

Use kubectl autoscale to scale pods automatically

About this task

You can use the Kubernetes Horizontal Pod Autoscaler to scale a Kubernetes deployment up and down based on load. Refer to the official walkthrough at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ to create a PHP application that scales horizontally.
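
From that walkthrough, the php-apache Deployment and Service used in the steps below can be created with:

    ~(keystone_admin)$ kubectl apply -f https://k8s.io/examples/application/php-apache.yaml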

Prerequisites

You must have uploaded and applied the Metrics Server application as described in Install Metrics Server before proceeding.

Procedure

After the application deployment has completed, you can create a horizontal pod autoscaler (HPA) definition for the deployment as follows:

  1. Use the following command to turn on autoscaling:

    ~(keystone_admin)$ kubectl autoscale deployment <your-application> --cpu-percent=50 --min=1 --max=10
    
  2. Use the following command to see the created horizontal pod autoscaler:

    ~(keystone_admin)$ kubectl get hpa
    
  3. When the incoming load to your application deployment increases and the CPU usage of the existing replicas exceeds the previously specified threshold, new replicas are created. For the PHP example above, use the following command to increase the incoming load:

    ~(keystone_admin)$ kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
    
  4. (Optional) Use the following commands to check if replicas were created:

    ~(keystone_admin)$ kubectl get hpa
    

    or

    ~(keystone_admin)$ kubectl get deployment <your-application>
    

    If you delete the load-generator pod, the number of replicas decreases automatically.
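
The same autoscaler can also be declared as a manifest rather than created with kubectl autoscale. The following is a minimal sketch using the autoscaling/v2 API (GA since Kubernetes v1.23; older clusters use autoscaling/v2beta2), with the same deployment name placeholder and thresholds as above:

    ~(keystone_admin)$ kubectl apply -f - <<EOF
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: <your-application>
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: <your-application>
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
    EOF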

Use the Metrics API directly within your container

It is also possible to use the Metrics API directly within your containerized application in order to trigger application-specific load management.

The Metrics API consists of the following GET endpoints under the base path /apis/metrics.k8s.io/v1beta1:

/nodes

All node metrics

/nodes/{node}

Metrics for a specified node

/namespaces/{namespace}/pods

Metrics for all pods within the specified namespace

/namespaces/{namespace}/pods/{pod}

Metrics for a specified pod

/pods

Metrics for all pods across all namespaces
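
For quick inspection from the command line, these endpoints can also be queried through the apiserver with kubectl get --raw, for example:

    ~(keystone_admin)$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
    ~(keystone_admin)$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods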

Sample application

The sample application is a NodeJS-based application that requests metrics every second and prints them to the console.

For a sample containerized application that uses the Metrics API, see: https://opendev.org/starlingx/metrics-server-armada-app/src/branch/master/sample-app.

All the requirements to deploy and run the sample application are captured in the sample-app.yml file: the service account, the cluster role and cluster role binding that allow the application to communicate with the apiserver, and the application deployment itself.

The application pulls the token associated with the service account from its default location (/var/run/secrets/kubernetes.io/serviceaccount/token) in order to perform authenticated requests to the /apis/metrics.k8s.io/v1beta1/pods endpoint.
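
A minimal sketch of the equivalent request issued from inside such a pod, assuming the container image provides curl and the pod's service account has read access to the metrics.k8s.io API group:

    # Standard in-cluster service account mounts provide the token and CA certificate.
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
         -H "Authorization: Bearer ${TOKEN}" \
         https://kubernetes.default.svc/apis/metrics.k8s.io/v1beta1/pods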

Sample application structure

- sample-app.yml
- Dockerfile
- src
    - package.json
    - sample-application.js

sample-app.yml

Contains the sample-app Kubernetes Deployment, Cluster Role, Cluster Role Binding, and Service Account

src

Contains the NodeJS application

Dockerfile

Application Dockerfile

Run sample application

Procedure

  1. Run the following command to deploy the application using the sample-app.yml file:

    ~(keystone_admin)$ kubectl apply -f sample-app.yml
    
  2. Run the following command to check if the application pod is running:

    ~(keystone_admin)$ kubectl get pods -n sample-application-ns
    
  3. Run the following command to view the logs and check that the sample application is successfully querying the Metrics Server API:

    ~(keystone_admin)$ kubectl logs -n sample-application-ns <pod-name> --tail 1 -f
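
When you are finished, the sample application's resources can be removed with the same manifest:

    ~(keystone_admin)$ kubectl delete -f sample-app.yml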
    
