Kubernetes Memory Manager Policies

Kubernetes memory manager policies govern memory allocation for pods, with a focus on NUMA topology awareness and performance optimization. You can set the policy using the kube-memory-mgr-policy host label via the CLI.

The kube-memory-mgr-policy host label supports two values: none (the default) and static.

For example:

~(keystone_admin)]$ system host-lock worker-1
~(keystone_admin)]$ system host-label-assign --overwrite worker-1 kube-memory-mgr-policy=static
~(keystone_admin)]$ system host-unlock worker-1

When set to static, the policy ensures NUMA-aware memory allocation for pods in the Guaranteed QoS class, reserving memory to meet their requirements and reduce latency. Memory for system processes can also be reserved using the kubelet --reserved-memory flag, which enhances node stability. For BestEffort and Burstable pods, no memory is reserved and the default topology hints are used.
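
For illustration, the following minimal pod manifest falls into the Guaranteed QoS class because requests equal limits for every resource; the pod and container names are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo        # hypothetical name
spec:
  containers:
  - name: app                  # hypothetical name
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"               # requests == limits => Guaranteed QoS
        memory: 1Gi

With the static policy in effect, the memory manager reserves this pod's 1Gi against a NUMA node (or a minimal set of nodes) selected through the topology manager, rather than letting the allocation float across the whole machine.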

This approach enables better performance for workloads that require predictable memory usage, but it requires careful configuration to ensure compatibility with system resources.
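
To make the careful-configuration point concrete, the upstream kubelet flags behind this feature follow the pattern sketched below for a two-NUMA-node worker; the sizes are illustrative, and the memory reserved across all NUMA nodes must add up to the total of the kube-reserved, system-reserved, and hard-eviction memory settings. On this platform the host label applies the policy for you, so these flags are shown only to illustrate the underlying mechanism:

--memory-manager-policy=Static
--kube-reserved=memory=50Mi
--system-reserved=memory=333Mi
--eviction-hard=memory.available<500Mi
--reserved-memory '0:memory=500Mi'
--reserved-memory '1:memory=383Mi'

Here 500Mi + 383Mi equals 50Mi + 333Mi + 500Mi, satisfying the kubelet's consistency check on the reservation totals.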

For configuration options and detailed examples, consult the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/.

Limitations

The interaction between the kube-memory-mgr-policy=static policy and the restricted topology manager policy can prevent pods from being scheduled or started, even when sufficient memory is available. This is due to the restrictive design of the NUMA-aware memory manager, which prevents the same NUMA node from serving both single-NUMA and multi-NUMA allocations. It is important that you understand the implications of these memory management policies and configure your systems accordingly to avoid unexpected admission failures.
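
If a pod is rejected for this reason, its status typically reports a TopologyAffinityError, which you can check along the following lines (the pod name is hypothetical):

kubectl get pod guaranteed-demo -o jsonpath='{.status.reason}'

You can also inspect the memory manager's per-NUMA assignments on the affected node in /var/lib/kubelet/memory_manager_state.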
