Use Shared Resource (CPU)

In StarlingX, shared CPUs are allocated from the Application CPU pool. By default, all unused CPUs are assigned to the Application CPU pool.
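
To see how the CPUs on a host are currently assigned across pools, you can list them with the StarlingX system CLI; the host name controller-0 below is an assumption:

~(keystone_admin)]$ system host-cpu-list controller-0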

Use the example manifest below to create a VM whose CPUs are allocated from the shared pool:

 cat <<EOF > sample-vm-shared-cpu.yaml
 apiVersion: kubevirt.io/v1
 kind: VirtualMachine
 metadata:
  name: ubuntu-bionic-shared-cpu
 spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: ubuntu-bionic
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
            - name: containervolume
              disk:
                bus: virtio
            - name: cloudinitvolume
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: "10Gi"
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containervolume
          containerDisk:
            image: tedezed/ubuntu-container-disk:20.0
        - name: cloudinitvolume
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              chpasswd:
                list: |
                  ubuntu:ubuntu
                  root:root
                expire: False
 EOF
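
Apply the manifest and confirm that the VMI starts; the file name follows from the heredoc above:

~(keystone_admin)]$ kubectl apply -f sample-vm-shared-cpu.yaml
~(keystone_admin)]$ kubectl get vmi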

Note

In a VM manifest, any CPUs that are not defined as dedicated are allocated from the Application (shared) CPU pool.

Assign Dedicated Resources

Dedicated CPU Resources

Workloads that require predictable latency and enhanced performance during execution benefit from dedicated CPU resources. KubeVirt, relying on the Kubernetes CPU Manager, can pin a guest's vCPUs to the host's pCPUs.

Note

Enable the Kubernetes CPU Manager on the StarlingX host before using dedicated CPU resources.

Kubernetes does not provide CPU Manager detection, so you need to add the CPUManager feature gate manually to the KubeVirt CR:

apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
      - LiveMigration
      - Macvtap
      - Snapshot
      - CPUManager
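
One way to apply this change, assuming KubeVirt was deployed with the default CR name kubevirt in the kubevirt namespace, is a merge patch that rewrites the full feature-gate list:

~(keystone_admin)]$ kubectl -n kubevirt patch kubevirt kubevirt --type merge -p '{"spec":{"configuration":{"developerConfiguration":{"featureGates":["LiveMigration","Macvtap","Snapshot","CPUManager"]}}}}'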

Then, check the label:

~(keystone_admin)]$ kubectl describe node | grep cpumanager
cpumanager=true

Request Dedicated CPU Resources from Application CPU Pool

Setting spec.domain.cpu.dedicatedCpuPlacement to true in a VMI spec indicates that dedicated CPU resources should be allocated to the VMI.

KubeVirt verifies that all the necessary conditions are met for the Kubernetes CPU Manager to pin the virt-launcher container to dedicated host CPUs. Once virt-launcher is running, the VMI's vCPUs are pinned to the pCPUs that have been dedicated to the virt-launcher container.

You can express the desired number of vCPUs either by setting the guest topology in spec.domain.cpu (sockets, cores, threads) or by setting spec.domain.resources.[requests/limits].cpu to a whole number ([1-9]+) indicating the number of vCPUs requested for the VMI. The number of vCPUs is calculated as sockets * cores * threads; if spec.domain.cpu is empty, the value is taken from spec.domain.resources.requests.cpu or spec.domain.resources.limits.cpu.
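
For instance, the following two partial specs (illustrative fragments, not complete manifests) both request 2 vCPUs, first through the guest topology:

domain:
  cpu:
    sockets: 1
    cores: 2
    threads: 1

and alternatively through the resource request:

domain:
  resources:
    requests:
      cpu: "2"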

A complete example manifest:

 apiVersion: kubevirt.io/v1
 kind: VirtualMachine
 metadata:
  name: ubuntu-bionic-dedicated-cpu
 spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: ubuntu-bionic
    spec:
      domain:
        cpu:
          cores: 2
          sockets: 1
          threads: 1
          dedicatedCpuPlacement: true
        devices:
          disks:
            - name: containervolume
              disk:
                bus: virtio
            - name: cloudinitvolume
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containervolume
          containerDisk:
            image: tedezed/ubuntu-container-disk:20.0
        - name: cloudinitvolume
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              chpasswd:
                list: |
                  ubuntu:ubuntu
                  root:root
                expire: False
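
Once the VM is running, you can check which host CPUs the virt-launcher container is pinned to. The pod name placeholder and the cgroup v1 path below are assumptions; on hosts using cgroup v2, read /sys/fs/cgroup/cpuset.cpus.effective instead:

~(keystone_admin)]$ kubectl get pods | grep virt-launcher-ubuntu-bionic-dedicated-cpu
~(keystone_admin)]$ kubectl exec <virt-launcher-pod> -- cat /sys/fs/cgroup/cpuset/cpuset.cpus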

Isolated CPU Resources

StarlingX supports running the most critical low-latency applications on host CPUs that are completely isolated from the host process scheduler. With the Kubernetes CPU management policy set to static, this allows low-latency applications to run with optimal efficiency.

Request Isolated CPU Resources

Note

Make sure the StarlingX host is configured with isolated CPU cores.

For details, refer to Isolate CPU Cores to Enhance Application Performance.
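
As a sketch, isolated cores are provisioned with the system CLI while the host is locked; the host name worker-0, the processor selection (-p0), and the core count are assumptions:

~(keystone_admin)]$ system host-lock worker-0
~(keystone_admin)]$ system host-cpu-modify -f application-isolated -p0 8 worker-0
~(keystone_admin)]$ system host-unlock worker-0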

Specifying windriver.com/isolcpus: <x> in the VM's resource requests and limits allocates CPUs from the application-isolated core pool.

Below is an example manifest requesting isolated cores.

 apiVersion: kubevirt.io/v1
 kind: VirtualMachine
 metadata:
  name: ubuntu-bionic-isol-cores
 spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: ubuntu-bionic
    spec:
      domain:
        cpu:
          sockets: 1
          cores: 4
          threads: 1
          dedicatedCpuPlacement: true
        devices:
          disks:
            - name: containervolume
              disk:
                bus: virtio
            - name: cloudinitvolume
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 4Gi
            windriver.com/isolcpus: 4
          limits:
            memory: 4Gi
            windriver.com/isolcpus: 4
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containervolume
          containerDisk:
            image: tedezed/ubuntu-container-disk:20.0
        - name: cloudinitvolume
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              chpasswd:
                list: |
                  ubuntu:ubuntu
                  root:root
                expire: False
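
After the VM starts, you can confirm that the isolated-CPU extended resource is advertised by, and consumed on, the node:

~(keystone_admin)]$ kubectl describe node | grep -i isolcpus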

Huge Pages

Huge pages, in the context of a KubeVirt VM, refer to a Linux kernel feature that allows the efficient use of large memory pages. Huge pages can be leveraged to enhance the performance of memory-intensive applications, especially those with large datasets.

Request Huge Pages

Note

Make sure the StarlingX host is pre-configured with huge pages.

For more details on huge page configuration, refer to Allocate Host Memory Using the CLI.
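
Before creating the VM, you can verify that the host advertises 1 GiB huge pages to Kubernetes:

~(keystone_admin)]$ kubectl describe node | grep hugepages-1Gi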

You can request memory in the form of huge pages by specifying spec.domain.memory.hugepages.pageSize.

Example:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
 name: ubuntu-bionic-1g
spec:
 running: true
 template:
   metadata:
     labels:
       kubevirt.io/size: small
       kubevirt.io/domain: ubuntu-bionic
   spec:
     domain:
       cpu:
         cores: 1
       devices:
         disks:
           - name: containervolume
             disk:
               bus: virtio
           - name: cloudinitvolume
             disk:
               bus: virtio
         interfaces:
         - name: default
           masquerade: {}
       resources:
         requests:
           memory: "10Gi"
       memory:
         hugepages:
           pageSize: "1Gi"
     networks:
     - name: default
       pod: {}
     volumes:
       - name: containervolume
          containerDisk:
           image: tedezed/ubuntu-container-disk:20.0
       - name: cloudinitvolume
         cloudInitNoCloud:
           userData: |-
             #cloud-config
             chpasswd:
               list: |
                 ubuntu:ubuntu
                 root:root
               expire: False
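
Apply the manifest and verify the VM; the file name is an assumption, and the meminfo check is meant to run on the host where the VM is scheduled:

~(keystone_admin)]$ kubectl apply -f ubuntu-bionic-1g.yaml
~(keystone_admin)]$ kubectl get vmi ubuntu-bionic-1g
~(keystone_admin)]$ grep HugePages /proc/meminfo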