Kubernetes Authentication & Authorization¶
Kubernetes API User Authentication to use OIDC DEX¶
Overview of OIDC DEX backend Identity Providers¶
Kubernetes is configured by default to support OIDC Authentication using the ‘oidc-auth-apps’ OIDC DEX Proxy Identity Provider. Refer to the procedures in this section to validate or customize the configuration.
Centralized vs Distributed OIDC Authentication Setup in Distributed Cloud¶
In a Distributed Cloud configuration, you can configure OIDC authentication in a centralized or distributed setup.
Centralized setup: The oidc-auth-apps OIDC Identity Provider runs only on the System Controller, with both the System Controller and all subcloud Kubernetes clusters configured to use this central oidc-auth-apps OIDC Identity Provider.
Distributed setup: The oidc-auth-apps OIDC Identity Provider runs on both the System Controller and the subclouds. The System Controller’s Kubernetes cluster uses the System Controller’s oidc-auth-apps OIDC Identity Provider, while each subcloud’s Kubernetes cluster uses its local oidc-auth-apps OIDC Identity Provider.
By default, during bootstrap or installation of the System Controller and subclouds, OIDC authentication is configured in the centralized setup, using Local LDAP on the System Controller.
Centralized Setup¶
The default Distributed Cloud configuration uses a centralized OIDC setup. In this centralized setup, oidc‑auth‑apps runs only on the System Controller, and all kube‑apiserver instances, both on the System Controller and across all subclouds, authenticate tokens against the OIDC issuer provided by the System Controller. In this setup, users log in through the System Controller’s OIDC identity provider, obtain an OIDC token, and can use it to access any subcloud and the System Controller.
For a centralized OIDC authentication setup, use the following procedure:
Procedure
Configure the oidc-auth-apps only on the System Controller. For more information, see Local OIDC IdP Proxy Dex.
Configure the kube-apiserver parameters on the System Controller and each subcloud either during bootstrapping or by using the system service-parameter-add kubernetes kube_apiserver command after bootstrapping the system, using the System Controller’s floating OAM IP address as the oidc-issuer-url for all clouds.
For example, on the subcloud:

oidc-issuer-url=https://<central-cloud-floating-ip>:<oidc-auth-apps-dex-service-NodePort>/dex

For more information, see:
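Combining the pieces above, the subcloud-side configuration after bootstrap might look roughly as follows; the IP address is a placeholder, and 30556 is the default NodePort described later in this document:

```shell
~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver \
    oidc-issuer-url=https://<central-cloud-floating-ip>:30556/dex
~(keystone_admin)$ system service-parameter-apply kubernetes
```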
Distributed Setup¶
For a distributed OIDC authentication setup, use the following procedure:
Procedure
Configure the oidc-auth-apps on both System Controller and on subclouds.
The oidc-auth-apps on all clouds can be configured to communicate with the same or different authentication servers (Windows Active Directory and/or LDAP). However, each cloud manages OIDC tokens individually. A user must log in, authenticate, and get an OIDC token for each cloud independently.
For more information, see Local OIDC IdP Proxy Dex.
Configure the kube-apiserver parameters on the System Controller using the System Controller’s floating OAM IP address as the oidc-issuer-url. Configure each subcloud’s Kubernetes cluster to use its local subcloud oidc-auth-apps OIDC Identity Provider.
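By contrast with the centralized setup, each subcloud in a distributed setup points its issuer at its own OAM floating IP. A hedged sketch of the subcloud-side commands (the IP address is a placeholder, and 30556 is the default NodePort):

```shell
~(keystone_admin)$ system service-parameter-add kubernetes kube_apiserver \
    oidc-issuer-url=https://<subcloud-oam-floating-ip>:30556/dex
~(keystone_admin)$ system service-parameter-apply kubernetes
```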
For more information, see:
Postrequisites
For more information on configuring Users, Groups, Authorization, and kubectl for the user and retrieving the token on subclouds, see:
Configure Kubernetes for OIDC Token Validation while Bootstrapping the System¶
The Kubernetes cluster’s kube-apiserver is configured by default to validate OIDC tokens using the ‘oidc-auth-apps’ OIDC DEX Identity Provider.
About this task
Use this procedure only when you need to override the default OIDC parameters during bootstrap.
The values set in this procedure can be changed at any time using service parameters as described in Configure Kubernetes for OIDC Token Validation after Bootstrapping the System.
Procedure
Configure the Kubernetes cluster kube-apiserver by adding the following parameters to the localhost.yml file, during bootstrap:
# cd ~
# cat <<EOF > /home/sysadmin/localhost.yml
apiserver_oidc:
  client_id: stx-oidc-client-app
  issuer_url: https://<oam-floating-ip>:<oidc-auth-apps-dex-service-NodePort>/dex
  username_claim: name
  groups_claim: groups
EOF
where:
<oidc-auth-apps-dex-service-NodePort>
is the port to be configured for the NodePort service for dex in oidc-auth-apps. The default is 30556.
The values of the username_claim and groups_claim parameters may vary depending on the user and group configuration in your Windows Active Directory or LDAP server.
Note
For IPv6 deployments, ensure that the IPv6 OAM floating address in the issuer_url is written in lower case and wrapped in square brackets, for example:

https://[<oam-floating-ip>]:30556/dex
For more information on OIDC Authentication for subclouds, see Centralized vs Distributed OIDC Authentication Setup.
Configure Kubernetes for OIDC Token Validation after Bootstrapping the System¶
The Kubernetes cluster’s kube-apiserver OIDC token validation is configured by default to use the ‘oidc-auth-apps’ OIDC DEX Identity Provider.
Use this procedure to modify OIDC‑related settings after bootstrap, such as changing the issuer URL, client ID, or claims.
About this task
As an alternative to performing this configuration at bootstrap time as described in Configure Kubernetes for OIDC Token Validation while Bootstrapping the System, you can do so at any time using service parameters.
Procedure
Set the following service parameters using the system service-parameter-modify kubernetes kube_apiserver command.
For example:
~(keystone_admin)$ system service-parameter-modify kubernetes kube_apiserver oidc-client-id=stx-oidc-client-app
~(keystone_admin)$ system service-parameter-modify kubernetes kube_apiserver oidc-issuer-url=https://${OAMIP}:<oidc-auth-apps-dex-service-NodePort>/dex
~(keystone_admin)$ system service-parameter-modify kubernetes kube_apiserver oidc-username-claim=name
~(keystone_admin)$ system service-parameter-modify kubernetes kube_apiserver oidc-groups-claim=groups

where:
<oidc-auth-apps-dex-service-NodePort> is the port to be configured for the NodePort service for dex in oidc-auth-apps. The default is 30556.
The values of the oidc-username-claim and oidc-groups-claim parameters may vary depending on the user and group configuration in your Windows Active Directory or LDAP server.
Note
For IPv6 deployments, ensure that the IPv6 OAM floating address is written in lower case and wrapped in square brackets, for example:

https://[<oam-floating-ip>]:30556/dex

The valid combinations of these service parameters are:
none of the parameters
oidc-issuer-url, oidc-client-id, and oidc-username-claim
oidc-issuer-url, oidc-client-id, oidc-username-claim, and oidc-groups-claim
Note
Historical service parameters for OIDC with underscores are still accepted: oidc_client_id, oidc_issuer_url, oidc_username_claim and oidc_groups_claim. These are equivalent to: oidc-client-id, oidc-issuer-url, oidc-username-claim and oidc-groups-claim.
Apply the service parameters.
~(keystone_admin)$ system service-parameter-apply kubernetes
For more information on OIDC Authentication for subclouds, see Centralized vs Distributed OIDC Authentication Setup.
Configure Kubernetes Client Access¶
You can configure Kubernetes access for local and remote clients to authenticate through Windows Active Directory or LDAP server using oidc-auth-apps OIDC Identity Provider (dex).
Configure Kubernetes Local Client Access¶
About this task
Use the procedure below to configure Kubernetes access for a user logged in to the active controller either through SSH or by using the system console.
Note
If the user’s SSH/console access is to be authenticated using an external Windows Active Directory or LDAP server, see also SSH Authentication.
Procedure
Execute the commands below to create the Kubernetes configuration file for the logged-in user. These commands only need to be executed once. The file “~/.kube/config” will be created. The user referenced in its contents is the currently logged-in user.
~$ kubeconfig-setup
~$ source ~/.profile
Run the oidc-auth script to authenticate and update the user’s credentials in the Kubernetes configuration file.
~$ oidc-auth
Note
The oidc-auth script has the following optional parameters that may need to be specified:
--cacert <path to ca cert file>: Provides the CA certificate that validates the server certificate specified by the -c option. By default, the command reads the value of the OS_CACERT environment variable. If none is specified, the command accesses the server without verifying its certificate.

-c <OIDC_app_IP>: The IP address where the OIDC app is running. When not provided, it defaults to “oamcontroller”, which is an alias for the controller floating OAM IP. There are two instances where this parameter is used: for local client access inside subclouds of a centralized setup, where the oidc-auth-apps runs only on the System Controller, and for remote client access.

-p <password>: The user’s password. If not provided, the user is prompted for it. This parameter is essential in non-interactive shells.

-u <username>: The user to be authenticated. When not provided, it defaults to the currently logged-in user. This parameter is usually needed in remote client access scenarios, where the logged-in user differs from the user to be authenticated.

-b <backend_ID>: Specifies the backend used for authentication. It is only needed if more than one backend is configured in the oidc-auth-apps OIDC Identity Provider (Dex).
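For example, a user on a subcloud of a centralized setup might combine these options to authenticate against the System Controller’s Identity Provider; the address and backend ID below are placeholders:

```shell
~$ oidc-auth -c <system-controller-oam-ip> -u testuser -b <backend_ID>
```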
Configure Kubernetes Remote Client Access¶
Kubernetes Remote Client Access using the Host Directly¶
Procedure
Install the kubectl client CLI on the host. Follow the instructions on Install and Set Up kubectl on Linux. The example below can be used for Ubuntu.
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
Contact your system administrator for the StarlingX system-local-ca CA certificate. Copy this certificate to your system as stx-ca.crt.

Create an empty Kubernetes configuration file (the default path is ~/.kube/config). Run the commands below to update this file. Use the OAM IP address and the system-local-ca CA certificate acquired in the previous step. If the OAM IP is IPv6, use the IP enclosed in brackets (example: [fd00::a14:803]). In the example below, the user is admin-user. Change it to the name of the user you want to authenticate.

$ MYUSER="admin-user"
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
$ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 stx-ca.crt)
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster

Get a Kubernetes authentication token. There are two options: the first is through the oidc-auth script and the second is through the browser. Both options are described below.
To get the token through oidc-auth script, execute the steps below.
Install “Python Mechanize” module using the following command:
$ sudo pip install mechanize
Install the oidc-auth from a StarlingX mirror.
Execute the command below to get the token and update it in the Kubernetes configuration file. If the target environment has multiple backends configured, you will need to use the parameter -b <backend_ID>. If the target environment is a DC system with a centralized setup, you should use the OAM IP of the System Controller.

$ oidc-auth -u ${MYUSER} -c <OAM_IP>
To get the token through a browser, execute the steps below.
Use the following URL to log in to the oidc-auth-apps OIDC client: https://<oam-floating-ip-address>:30555. If the target environment is a DC system with a centralized setup, you should use the OAM IP of the System Controller.

If the StarlingX oidc-auth-apps has been configured for multiple ‘ldap’ connectors, select the Windows Active Directory or the LDAP server for authentication.
Enter your Username and Password.
Click Login. The ID token and Refresh token are displayed as follows:
ID Token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjQ4ZjZkYjcxNGI4ODQ5ZjZlNmExM2Y2ZTQzODVhMWE1MjM0YzE1NTQifQ.eyJpc3MiOiJodHRwczovLzEyOC4yMjQuMTUxLjE3MDozMDU1Ni9kZXgiLCJzdWIiOiJDZ2R3ZG5SbGMzUXhFZ1JzWkdGdyIsImF1ZCI6InN0eC1vaWRjLWNsaWVudC1hcHAiLCJleHAiOjE1ODI1NzczMTksImlhdCI6MTU4MjU3NzMwOSwiYXRfaGFzaCI6ImhzRG1kdTFIWGFCcXFNLXBpYWoyaXciLCJlbWFpbCI6InB2dGVzdDEiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6InB2dGVzdDEifQ.TEZ-YMd8kavTGCw_FUR4iGQWf16DWsmqxW89ZlKHxaqPzAJUjGnW5NRdRytiDtf1d9iNIxOT6cGSOJI694qiMVcb-nD856OgCvU58o-e3ZkLaLGDbTP2mmoaqqBYW2FDIJNcV0jt-yq5rc9cNQopGtFXbGr6ZV2idysHooa7rA1543EUpg2FNE4qZ297_WXU7x0Qk2yDNRq-ngNQRWkwsERM3INBktwQpRUg2na3eK_jHpC6AMiUxyyMu3o3FurTfvOp3F0eyjSVgLqhC2Rh4xMbK4LgbBTN35pvnMRwOpL7gJPgaZDd0ttC9L5dBnRs9uT-s2g4j2hjV9rh3KciHQ

Access Token: wcgw4mhddrk7jd24whofclgmj

Claims:
{
  "iss": "https://128.224.151.170:30556/dex",
  "sub": "CgdwdnRlc3QxEgRsZGFw",
  "aud": "stx-oidc-client-app",
  "exp": 1582577319,
  "iat": 1582577319,
  "at_hash": "hsDmdu1HXaBqqM-piaj2iw",
  "email": "testuser",
  "email_verified": true,
  "groups": [
    "billingDeptGroup",
    "managerGroup"
  ],
  "name": "testuser"
}

Refresh Token: ChljdmoybDZ0Y3BiYnR0cmp6N2xlejNmd3F5Ehlid290enR5enR1NWw1dWM2Y2V4dnVlcHli

Use the ID token to set the Kubernetes credentials in kubectl configs:
$ TOKEN=<ID_token_string>
$ kubectl config set-credentials ${MYUSER} --token ${TOKEN}
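An ID token in JWT format can be inspected before use: its payload (the middle dot-separated segment) is simply base64url-encoded JSON. The sketch below builds a toy token so that it is self-contained, then decodes the claims; it does not verify the signature, so it is only a sanity check:

```shell
# Build a toy JWT-shaped token; a real ID token has the same three-segment shape.
CLAIMS='{"name":"testuser","groups":["k8s-reader","k8s-admin"]}'
PAYLOAD=$(printf '%s' "$CLAIMS" | base64 -w0 | tr '+/' '-_' | tr -d '=')
TOKEN="header.${PAYLOAD}.signature"

# Extract the middle segment, restore base64 padding, and decode the claims.
SEG=$(printf '%s' "$TOKEN" | cut -d. -f2)
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
printf '%s' "$SEG" | tr '_-' '/+' | base64 -d
```

The same decoding applied to a real ID token shows the name and groups claims that kube-apiserver will use for authorization decisions.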
Kubernetes API User Authorization¶
Configure Users, Groups, and Authorization¶
In the examples provided below, Kubernetes permissions are given to the daveuser user. Two different ways to do this are presented: in the first option, daveuser is directly bound to a role; in the second option, daveuser is indirectly associated with a Kubernetes group that has permissions.
Note
For larger environments, such as a DC with many subclouds, or to minimize Kubernetes custom cluster configuration, use the second option, where permissions are granted through Kubernetes groups. Apply the Kubernetes RBAC policy to the central cloud and to each subcloud where Kubernetes permissions are required.
Grant Kubernetes permissions through direct role binding¶
Create the following deployment file and deploy the file with kubectl apply -f <filename>.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: daveuser-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: daveuser
Grant Kubernetes permissions through groups¶
Create the following deployment file and deploy the file with kubectl apply -f <filename>.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-reader-rolebinding
subjects:
- kind: Group
  name: k8s-reader
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-reader-role
  apiGroup: rbac.authorization.k8s.io
---
# Note: the ClusterRole "cluster-admin" already exists in the system.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-rolebinding
subjects:
- kind: Group
  name: k8s-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Create the groups k8s-reader and k8s-admin in your Windows Active Directory or LDAP server. See Microsoft documentation on Windows Active Directory for additional information on adding users and groups to Windows Active Directory.
To give Kubernetes permissions to daveuser, add this user to either the k8s-reader or the k8s-admin group in your Windows Active Directory or LDAP server, depending on the permissions you want to grant. The permissions are granted because there is a mapping between a Windows Active Directory or LDAP group and a Kubernetes group with the same name. To remove Kubernetes permissions from daveuser, remove this user from the k8s-reader and k8s-admin groups in your Windows Active Directory or LDAP server.
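Once daveuser has authenticated and obtained a token, the effect of the group mapping can be spot-checked with kubectl auth can-i, which reports whether the current credentials permit a given action:

```shell
~$ kubectl auth can-i list pods
~$ kubectl auth can-i create deployments
```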
Note
The group names k8s-reader and k8s-admin are arbitrary. As long as the Windows Active Directory or LDAP group has the same name as the Kubernetes group, the mapping will happen. For example, if a more company-specific approach is preferred, the k8s-reader and k8s-admin groups could be named after departments, like billingDeptGroup and managerGroup.
Private Namespace and Restricted RBAC¶
A non-admin type user typically does not have permissions for any cluster-scoped resources and only has read and/or write permissions to resources in one or more namespaces.
About this task
Note
All of the RBAC resources for managing non-admin type users, although they may apply to private namespaces, are created in kube-system such that only admin-level users can manage non-admin type users, roles, and rolebindings.
The following example creates a non-admin service account called dave-user with read/write type access to a single private namespace (billing-dept-ns).
Note
The following example creates and uses ServiceAccounts as the user mechanism and subject for the rolebindings, however the procedure equally applies to user accounts defined in an external Windows Active Directory as the subject of the rolebindings.
Procedure
If it does not already exist, create a general user role defining restricted permissions for general users.
This is of the type ClusterRole so that it can be used in the context of any namespace when binding to a user.
Create the user role definition file.
% cat <<EOF > general-user-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: general-user
rules:
# For the core API group (""), allow full access to all resource types
# EXCEPT for resource policies (limitranges and resourcequotas) only allow read access
- apiGroups: [""]
  resources: ["bindings", "configmaps", "endpoints", "events", "persistentvolumeclaims", "pods", "podtemplates", "replicationcontrollers", "secrets", "serviceaccounts", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["limitranges", "resourcequotas"]
  verbs: ["get", "list"]
# Allow full access to all resource types of the following explicit list of apiGroups.
# Notable exceptions here are:
#   ApiGroup                       ResourceTypes
#   --------                       -------------
#   policy                         podsecuritypolicies, poddisruptionbudgets
#   networking.k8s.io              networkpolicies
#   admissionregistration.k8s.io   mutatingwebhookconfigurations, validatingwebhookconfigurations
#
- apiGroups: ["apps", "batch", "extensions", "autoscaling", "apiextensions.k8s.io", "rbac.authorization.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Cert Manager API access
- apiGroups: ["cert-manager.io", "acme.cert-manager.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF

Apply the definition.
~(keystone_admin)$ kubectl apply -f general-user-clusterrole.yaml
Create the billing-dept-ns namespace, if it does not already exist.
~(keystone_admin)$ kubectl create namespace billing-dept-ns
Create both the dave-user service account and the namespace-scoped RoleBinding.
The RoleBinding binds the general-user role to the dave-user ServiceAccount for the billing-dept-ns namespace.
Create the account definition file.
% cat <<EOF > dave-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dave-user
  namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dave-user-sa-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: dave-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dave-user
  namespace: billing-dept-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: general-user
subjects:
- kind: ServiceAccount
  name: dave-user
  namespace: kube-system
EOF

Apply the definition.
% kubectl apply -f dave-user.yaml
If the user requires use of the local docker registry, create an openstack user account for authenticating with the local docker registry.
If a project does not already exist for this user, create one.
% openstack project create billing-dept-ns
Create an openstack user in this project.
% openstack user create --password P@ssw0rd \
  --project billing-dept-ns dave-user
Note
Substitute a password conforming to your password formatting rules for P@ssw0rd.
Create a secret containing these userid/password credentials for use as an ImagePullSecret.
% kubectl create secret docker-registry registry-local-dave-user \
  --docker-server=registry.local:9001 --docker-username=dave-user \
  --docker-password=P@ssw0rd --docker-email=noreply@windriver.com \
  -n billing-dept-ns
dave-user can now push images to registry.local:9001/dave-user/ and use these images for pods by adding the secret above as an ImagePullSecret in the pod spec.
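For example, a pod spec in billing-dept-ns could reference an image pushed by dave-user, using the secret created above; the image name here is a hypothetical placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: billing-app
  namespace: billing-dept-ns
spec:
  # Credentials for pulling from the local docker registry.
  imagePullSecrets:
  - name: registry-local-dave-user
  containers:
  - name: billing-app
    image: registry.local:9001/dave-user/billing-app:latest
```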
If this user requires the ability to use helm, do the following.
Create a ClusterRole for reading namespaces, if one does not already exist.
% cat <<EOF > namespace-reader-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-reader
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "watch", "list"]
EOF
Apply the configuration.
% kubectl apply -f namespace-reader-clusterrole.yaml
Create a RoleBinding for the tiller account of the user’s namespace.
Note
The tiller account of the user’s namespace must be named ‘tiller’.
% cat <<EOF > read-namespaces-billing-dept-ns-tiller.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-namespaces-billing-dept-ns-tiller
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: billing-dept-ns
roleRef:
  kind: ClusterRole
  name: namespace-reader
  apiGroup: rbac.authorization.k8s.io
EOF
Apply the configuration.
% kubectl apply -f read-namespaces-billing-dept-ns-tiller.yaml
Resource Management¶
Kubernetes supports two types of resource policies, LimitRange and ResourceQuota.
LimitRange¶
By default, containers run with unbounded resources on a Kubernetes cluster, so a single Pod could monopolize all available resources on a worker node. A LimitRange is a policy to constrain resource allocations (for Pods or Containers) in a particular namespace.
Specifically a LimitRange policy provides constraints that can:
Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
Enforce a ratio between request and limit for a resource in a namespace.
Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
See https://kubernetes.io/docs/concepts/policy/limit-range/ for more details.
An example of LimitRange policies for the billing-dept-ns namespace of the example in Private Namespace and Restricted RBAC is shown below:
apiVersion: v1
kind: LimitRange
metadata:
name: mem-cpu-per-container-limit
namespace: billing-dept-ns
spec:
limits:
- max:
cpu: "800m"
memory: "1Gi"
min:
cpu: "100m"
memory: "99Mi"
default:
cpu: "700m"
memory: "700Mi"
defaultRequest:
cpu: "110m"
memory: "111Mi"
type: Container
---
apiVersion: v1
kind: LimitRange
metadata:
name: mem-cpu-per-pod-limit
namespace: billing-dept-ns
spec:
limits:
- max:
cpu: "2"
memory: "2Gi"
type: Pod
---
apiVersion: v1
kind: LimitRange
metadata:
name: pvc-limit
namespace: billing-dept-ns
spec:
limits:
- type: PersistentVolumeClaim
max:
storage: 3Gi
min:
storage: 1Gi
---
apiVersion: v1
kind: LimitRange
metadata:
name: memory-ratio-pod-limit
namespace: billing-dept-ns
spec:
limits:
- maxLimitRequestRatio:
memory: 10
type: Pod
ResourceQuota¶
A ResourceQuota policy object provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace. ResourceQuota limits can be created for cpu, memory, storage, and resource counts for all standard namespaced resource types, such as secrets, configmaps, etc.
See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.
An example of ResourceQuota policies for the billing-dept-ns namespace of Private Namespace and Restricted RBAC is shown below:
apiVersion: v1
kind: ResourceQuota
metadata:
name: resource-quotas
namespace: billing-dept-ns
spec:
hard:
persistentvolumeclaims: "1"
services.loadbalancers: "2"
services.nodeports: "0"
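Assuming the ResourceQuota example above is saved to a file (the file name below is hypothetical), it can be applied and then inspected; kubectl describe shows used versus hard limits per resource:

```shell
$ kubectl apply -f resource-quotas.yaml
$ kubectl describe resourcequota resource-quotas -n billing-dept-ns
```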
Pod Security Admission Controller¶
Pod Security Admission (PSA) Controller is the replacement for Pod Security Policies (PSP), and this document describes the PSA functionality.
The PSA controller acts on creation and modification of pods and determines whether a pod should be admitted, based on the requested security context and the policies defined by the Pod Security Standards.
Pod Security levels¶
Pod Security Admission levels refer to the 3 policies defined by the Pod Security Standards: privileged, baseline, and restricted.
- Privileged
Unrestricted policy, providing the widest possible level of permissions. This policy allows for known privilege escalations. It aims at system- and infrastructure-level workloads managed by privileged, trusted users.
- Baseline
Minimally restrictive policy which prevents known privilege escalations. It aims at ease of adoption for common containerized workloads for non-critical applications.
- Restricted
Heavily restricted policy, following current Pod hardening best practices. It is targeted at operators and developers of security-critical applications, as well as lower-trust users.
Pod Security Admission labels for namespaces¶
Pod security restrictions are applied at the namespace level.
With the PSA feature enabled, namespaces can be configured to define the admission control mode to be used for pod security in each namespace. Kubernetes defines a set of labels to set predefined Pod Security levels for a namespace. The label defines what action the control plane takes if a potential violation is detected.
A namespace can configure any or all modes, or set different levels for different modes. The modes are:
- enforce
Policy violations will cause the pod to be rejected.
- audit
Policy violations will trigger the addition of an audit annotation to the event recorded in the Kubernetes audit log but are otherwise allowed.
- warn
Policy violations will trigger a user-facing warning but are otherwise allowed.
For each mode, there are two labels that determine the policy used.
This is a generic namespace configuration using labels.
# label indicates which policy level to apply for the mode.
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# LEVEL must be one of `privileged`, `baseline`, or `restricted`.
pod-security.kubernetes.io/<MODE>: <LEVEL>
# Optional: per-mode version label can be used to pin the policy to the
# version that shipped with a given Kubernetes minor version (e.g. v1.24).
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# VERSION must be a valid Kubernetes minor version, or `latest`.
pod-security.kubernetes.io/<MODE>-version: <VERSION>
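These labels can be applied to an existing namespace with kubectl; for example (the namespace name and the chosen levels below are placeholders):

```shell
$ kubectl label --overwrite namespace <namespace> \
    pod-security.kubernetes.io/enforce=baseline \
    pod-security.kubernetes.io/enforce-version=latest
```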
For more information refer to https://kubernetes.io/docs/concepts/security/pod-security-admission/.
Configure defaults for the Pod Security Admission Controller¶
The PSA controller can be configured with default security polices and exemptions at bootstrap time.
The Default PSA controller configuration will apply to namespaces that are
not configured with the pod-security.kubernetes.io labels to specify a
security level and mode. For example, if you display the namespace description
using kubectl describe namespace <namespace> and the
pod-security.kubernetes.io labels are not displayed, then the behavior of
the namespace will follow the default PSA labels’ level, mode and version
configuration set with Pod Security plugin of the AdmissionConfiguration
resource.
To configure cluster-wide default policies and/or exemptions, the
Pod Security plugin of the AdmissionConfiguration resource can be used. The
AdmissionConfiguration resource is configurable at bootstrap time with the
apiserver_extra_args and apiserver_extra_volumes overrides in the
localhost.yml file.
Any policy that is applied via namespace labels will take precedence.
Example of configuration added to localhost.yml:
apiserver_extra_args:
admission-control-config-file: "/etc/kubernetes/admission-control-config-file.yaml"
apiserver_extra_volumes:
- name: admission-control-config-file
mountPath: "/etc/kubernetes/admission-control-config-file.yaml"
pathType: "File"
readOnly: true
content: |
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
configuration:
apiVersion: pod-security.admission.config.k8s.io/v1beta1
kind: PodSecurityConfiguration
defaults:
enforce: "privileged"
enforce-version: "latest"
audit: "privileged"
audit-version: "latest"
warn: "privileged"
warn-version: "latest"
See Kubernetes Custom Configuration for more details on kubernetes
configuration, apiserver_extra_args and apiserver_extra_volumes.
The generic definition of the AdmissionConfiguration resource can be found
at
https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/.
Platform namespaces configuration¶
In preparation for full PSA controller support, namespace labels have been
added to all the namespaces used by the platform. System namespaces, such as
kube-system and deployment, as well as application namespaces, such as
cert-manager, are created by default with the privileged label level.
The following labels configuration is applied by default to Platform namespaces:
controller-0:~$ kubectl describe namespace kube-system
Name: kube-system
Labels: kubernetes.io/metadata.name=kube-system
pod-security.kubernetes.io/audit=privileged
pod-security.kubernetes.io/audit-version=latest
pod-security.kubernetes.io/enforce=privileged
pod-security.kubernetes.io/enforce-version=latest
pod-security.kubernetes.io/warn=privileged
pod-security.kubernetes.io/warn-version=latest
Annotations: <none>
Status: Active
No resource quota.
No LimitRange resource
Pod Security Admission Controller - Usage Example¶
This page walks through a usage example of PSA where you will:
Create a namespace for each of the 3 security policy levels: privileged, baseline, and restricted.
Create a yaml file with a privileged pod configuration.
Create a privileged pod in all 3 namespaces.
The pod creation will only be successful in the privileged namespace.
controller-0:~$ vi baseline-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
name: baseline-ns
labels:
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: v1.24
pod-security.kubernetes.io/warn: baseline
pod-security.kubernetes.io/warn-version: v1.24
pod-security.kubernetes.io/audit: baseline
pod-security.kubernetes.io/audit-version: v1.24
controller-0:~$ kubectl apply -f baseline-ns.yaml
controller-0:~$ vi privileged-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
name: privileged-ns
labels:
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/enforce-version: v1.24
pod-security.kubernetes.io/warn: privileged
pod-security.kubernetes.io/warn-version: v1.24
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/audit-version: v1.24
controller-0:~$ kubectl apply -f privileged-ns.yaml
controller-0:~$ vi restricted-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: restricted-ns
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.24
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.24
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.24
controller-0:~$ kubectl apply -f restricted-ns.yaml
controller-0:~$ vi privileged-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
controller-0:~$ kubectl -n privileged-ns apply -f privileged-pod.yaml
pod/privileged created
controller-0:~$ kubectl -n baseline-ns apply -f privileged-pod.yaml
Error from server (Failure): error when creating "privileged-pod.yaml": privileged (container "pause" must not set securityContext.privileged=true)
controller-0:~$ kubectl -n restricted-ns apply -f privileged-pod.yaml
Error from server (Failure): error when creating "privileged-pod.yaml": privileged (container "pause" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "pause" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pause" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pause" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pause" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
controller-0:~$
For more information refer to https://kubernetes.io/docs/concepts/security/pod-security-admission/.
Deprovision LDAP Server Authentication & Authorization¶
You can remove Windows Active Directory or LDAP authentication from StarlingX.
Procedure
Remove the configuration of kube-apiserver to use oidc-auth-apps for authentication.
Determine the UUIDs of parameters used in the kubernetes kube-apiserver group.
These include oidc-client-id, oidc-groups-claim, oidc-issuer-url and oidc-username-claim.
~(keystone_admin)]$ system service-parameter-list
Delete each parameter.
~(keystone_admin)]$ system service-parameter-delete <UUID>
Apply the changes.
~(keystone_admin)]$ system service-parameter-apply kubernetes
Uninstall oidc-auth-apps.
~(keystone_admin)]$ system application-remove oidc-auth-apps
Clear the helm-override configuration.
~(keystone_admin)]$ system helm-override-update oidc-auth-apps dex kube-system --reset-values
~(keystone_admin)]$ system helm-override-show oidc-auth-apps dex kube-system
~(keystone_admin)]$ system helm-override-update oidc-auth-apps oidc-client kube-system --reset-values
~(keystone_admin)]$ system helm-override-show oidc-auth-apps oidc-client kube-system
~(keystone_admin)]$ system helm-override-update oidc-auth-apps secret-observer kube-system --reset-values
~(keystone_admin)]$ system helm-override-show oidc-auth-apps secret-observer kube-system
Remove secrets that contain certificate data. Depending on your configuration, some secrets listed below may not exist.
~(keystone_admin)]$ kubectl delete secret dex-ca-cert -n kube-system
~(keystone_admin)]$ kubectl delete secret oidc-auth-apps-certificate -n kube-system
~(keystone_admin)]$ kubectl delete secret wad-ca-cert -n kube-system
~(keystone_admin)]$ kubectl delete secret local-ldap-ca-cert -n kube-system
~(keystone_admin)]$ kubectl delete secret local-dex.tls -n kube-system
~(keystone_admin)]$ kubectl delete secret dex-client-secret -n kube-system
Remove any RBAC RoleBindings added for OIDC users and/or groups.
For example:
$ kubectl delete clusterrolebinding testuser-rolebinding
$ kubectl delete clusterrolebinding billingdeptgroup-rolebinding
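The OIDC service-parameter cleanup above can be scripted. A minimal sketch, assuming a pipe-delimited table from system service-parameter-list with the UUID in the second column; the table layout and uuid-* values below are illustrative, so verify the column order on your system before using this against live output:

```shell
# Hypothetical helper: pull the UUIDs of the OIDC kube-apiserver parameters
# out of a saved 'system service-parameter-list' table. The pipe-delimited
# layout is an assumption for illustration.
extract_oidc_uuids() {
  awk -F'|' '/oidc-(client-id|groups-claim|issuer-url|username-claim)/ {
    gsub(/ /, "", $2); print $2
  }' "$1"
}

# Demo against a sample table (uuid-* values are placeholders):
cat > /tmp/oidc-params.txt << 'EOF'
| uuid-1111 | kubernetes | kube_apiserver | oidc-client-id      | stx-oidc-client-app          |
| uuid-2222 | kubernetes | kube_apiserver | oidc-groups-claim   | groups                       |
| uuid-3333 | kubernetes | kube_apiserver | oidc-issuer-url     | https://10.10.10.2:30556/dex |
| uuid-4444 | kubernetes | kube_apiserver | oidc-username-claim | email                        |
EOF
extract_oidc_uuids /tmp/oidc-params.txt

# On a live system (illustrative only):
# for uuid in $(extract_oidc_uuids saved-parameter-list.txt); do
#   system service-parameter-delete "$uuid"
# done
# system service-parameter-apply kubernetes
```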
Kubernetes API User Access¶
Access Kubernetes CLI locally from SSH/Local Console Session¶
You can access the system using the local CLI from the active controller node's local console, or by connecting over SSH to the OAM floating IP address.
It is highly recommended that only ‘sysadmin’ and a small number of admin level user accounts be allowed to SSH to the system.
Using the sysadmin account and the Local CLI, you can perform all required system maintenance, administration and troubleshooting tasks.
For sysadmin Account¶
By default, the sysadmin account has Kubernetes Admin credentials.
If you plan on customizing the sysadmin kubectl configuration on the
StarlingX controller (for example, kubectl config set-... or
oidc-auth), you should use a private KUBECONFIG file and NOT
the system-managed KUBECONFIG file /etc/kubernetes/admin.conf, which can be
changed and overwritten by the system.
Copy /etc/kubernetes/admin.conf to a private file under
/home/sysadmin such as /home/sysadmin/.kube/config, and update
/home/sysadmin/.profile to have the KUBECONFIG environment variable
point to the private file.
For example, the following commands set up a private KUBECONFIG file.
# ssh sysadmin@<oamFloatingIpAddress>
Password:
% mkdir .kube
% cp /etc/kubernetes/admin.conf .kube/config
% echo "export KUBECONFIG=~/.kube/config" >> ~/.profile
% exit
Confirm that the KUBECONFIG environment variable is set correctly
and that kubectl commands are functioning properly.
# ssh sysadmin@<oamFloatingIpAddress>
Password:
% env | fgrep KUBE
KUBECONFIG=/home/sysadmin/.kube/config
% kubectl get pods
You can now access all Kubernetes CLI commands.
kubectl commands
Kubernetes commands are executed with the kubectl command.
For example:
~(keystone_admin)]$ kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
controller-0   Ready    master   5d19h   v1.13.5
~(keystone_admin)]$ kubectl get pods
NAME                                              READY   STATUS    RESTARTS   AGE
dashboard-kubernetes-dashboard-7749d97f95-bzp5w   1/1     Running   0          3d18h
Helm commands
Helm commands are executed with the helm command.
For example:
% helm repo add bitnami https://charts.bitnami.com/bitnami
% helm repo update
% helm repo list
% helm search repo
% helm install wordpress bitnami/wordpress
For an LDAP Account¶
Use kubeconfig-setup to set up KUBECONFIG for the local environment.
$ kubeconfig-setup
$ source ~/.profile
Use oidc-auth to authenticate via OIDC/LDAP.
$ oidc-auth
Using "joefulladmin" as username.
Password:
Successful authentication.
Updated /home/joefulladmin/.kube/config .
Use kubectl to test access to Kubernetes commands and resources (admin and non-admin).
# Displaying anything in 'kube-system' namespace requires 'cluster-admin' privileges
$ kubectl -n kube-system get secrets
NAME                         TYPE                   DATA   AGE
ceph-admin                   Opaque                 2      3d8h
ceph-pool-kube-cephfs-data   kubernetes.io/cephfs   4      3d8h
ceph-pool-kube-rbd           kubernetes.io/rbd      2      3d8h
# Anyone can display resources in 'default' namespace
$ kubectl -n default get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d9h
Access Kubernetes CLIs and GUI Remotely¶
For details on how to setup and use remote access to StarlingX CLIs and GUI, see Remote Access.
Access Kubernetes REST APIs¶
Access the Kubernetes REST API using the URL prefix https://<oam-floating-ip-address>:6443
and the API syntax described at The Kubernetes API.
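As a hedged sketch of direct REST access, the snippet below builds a namespaced core-API (v1) resource URL and shows a curl call; TOKEN, FLOATING_OAM_IP, and the api_url helper are illustrative names, not part of StarlingX itself, and the CA file is the system-local-ca.crt extracted elsewhere in this guide:

```shell
# Hypothetical helper: build a namespaced core-API (v1) resource URL.
#   api_url <server> <namespace> <resource>
api_url() {
  printf '%s/api/v1/namespaces/%s/%s\n' "$1" "$2" "$3"
}

api_url "https://<oam-floating-ip-address>:6443" default pods

# With a valid bearer token (obtained, for example, after oidc-auth):
# curl --cacert system-local-ca.crt \
#      -H "Authorization: Bearer ${TOKEN}" \
#      "$(api_url "https://${FLOATING_OAM_IP}:6443" default pods)"
```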
Configure Kubeconfig for OIDC Login¶
StarlingX supports the oidc-login plugin for kubectl. It provides an improved user experience when using OIDC authentication with Kubernetes CLI (kubectl).
With kubectl configured to use oidc-login, you can run kubectl commands directly. If there is no token or an expired token in the kubectl cache, then the plugin will open a browser for OIDC login (with MFA if applicable). If login is successful, the kubectl cache will be updated automatically with the resulting token, and kubectl will continue and send the command.
Note
If a valid browser session is available which previously logged into the OIDC IdP, such that browser cookies can still access a valid token, then user login is bypassed and the existing token is used.
Note
oidc-auth is still available; however, it works only with LDAP or
WAD backends on OIDC/Dex.
Prerequisites
Extract and save the system-local-ca.crt certificate.
~(keystone_admin)]$ kubectl -n cert-manager get secret system-local-ca -o jsonpath='{.data.ca\.crt}' | base64 -d > system-local-ca.crt
Save the system-local-ca.crt file in the same directory as the kubeconfig file created in the following steps.
Local kubectl¶
Follow these steps to configure external IdP:
Procedure
Create local OIDC login Kubeconfig.
Set the following variables with your cluster-specific values:
CA=system-local-ca.crt
FLOATING_OAM_IP=128.224.49.105
OIDC_CLIENT_ID=stx-oidc-client-app
OIDC_CLIENT_SECRET=St8rlingX
Where,

CA
    The system-local-ca.crt file created in the previous step.
FLOATING_OAM_IP
    The StarlingX system's floating OAM IP address. Get this from the StarlingX system with system addrpool-list.
OIDC_CLIENT_ID / OIDC_CLIENT_SECRET
    The values configured with Helm overrides for oidc-auth-apps. Get these from the StarlingX system with system helm-override-show oidc-auth-apps dex kube-system, using the staticClient.id and staticClient.secret fields.
Create the local-oidc-login-kubeconfig.yml file.

cat << EOF > local-oidc-login-kubeconfig.yml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${CA}
    server: https://${FLOATING_OAM_IP}:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://${FLOATING_OAM_IP}:30556/dex
      - --oidc-client-id=${OIDC_CLIENT_ID}
      - --oidc-client-secret=${OIDC_CLIENT_SECRET}
      - --certificate-authority=${CA}
      - --oidc-extra-scope=openid,profile,email,groups
      - --skip-open-browser
      - --oidc-redirect-url=http://${FLOATING_OAM_IP}:8000
      - --listen-address=0.0.0.0:8000
      interactiveMode: IfAvailable
EOF
Verify the configuration by running the following command:
KUBECONFIG=local-oidc-login-kubeconfig.yml kubectl get pods -n kube-system
Please visit the following URL in your browser:
http://128.224.48.105:8000/
Open the link in your browser and enter your username, password, and OTP token. Once authenticated in the browser, return to the command line; the kubectl command completes using the new token.
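Behind the scenes, kubectl runs the exec plugin named in the kubeconfig, and the plugin must print a client.authentication.k8s.io ExecCredential object on stdout; kubectl reads the bearer token from it. A sketch of the object shape, where the token and timestamp values are placeholders:

```shell
# Illustrative only: the shape of the ExecCredential object that the
# oidc-login plugin returns to kubectl (values are placeholders).
show_exec_credential() {
  cat << 'EOF'
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "token": "<oidc-id-token>",
    "expirationTimestamp": "2030-01-01T00:00:00Z"
  }
}
EOF
}
show_exec_credential
```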
Remote Standalone kubectl¶
You can use kubectl remotely in a standalone environment. For installation details, refer to the official Kubernetes documentation.
Install the oidc-login plugin using the kubectl naming convention (kubectl-<plugin-name>).
mkdir oidc-login
cd oidc-login
wget https://github.com/int128/kubelogin/releases/download/v1.34.2/kubelogin_linux_amd64.zip
unzip kubelogin_linux_amd64.zip
mv kubelogin /usr/local/bin/kubectl-oidc_login
Create remote OIDC login Kubeconfig.
Set the following variables with your cluster-specific values:
CA=system-local-ca.crt
FLOATING_OAM_IP=128.224.49.105
OIDC_CLIENT_ID=stx-oidc-client-app
OIDC_CLIENT_SECRET=St8rlingX
Create the remote-kubectl-oidc-login-kubeconfig.yml file.

cat << EOF > remote-kubectl-oidc-login-kubeconfig.yml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${CA}
    server: https://${FLOATING_OAM_IP}:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://${FLOATING_OAM_IP}:30556/dex
      - --oidc-client-id=${OIDC_CLIENT_ID}
      - --oidc-client-secret=${OIDC_CLIENT_SECRET}
      - --certificate-authority=${CA}
      - --oidc-extra-scope=openid,profile,email,groups
      interactiveMode: IfAvailable
EOF
Verify the configuration by running the following command:
KUBECONFIG=remote-kubectl-oidc-login-kubeconfig.yml kubectl get pods -n kube-system
Please visit the following URL in your browser:
http://localhost:8000
Note
The default web browser opens automatically. If it does not, open the link manually. Enter your username, password, and OTP token. Once authenticated in the browser, return to the command line; the kubectl command completes using the new token.
Remote kubectl in remote_cli¶
Prerequisites
Configure remote_cli as described in Configure Container-backed Remote CLIs.
Procedure
Create Kubeconfig for remote_cli.
Change to the remote_cli working directory:
cd $HOME/remote_cli_wd
Ensure that system-local-ca.crt is in the working directory. If not, copy it into the working directory:

~/remote_cli_wd$ ls $HOME/remote_cli_wd/system-local-ca.crt
/home/user/remote_cli_wd/system-local-ca.crt
Set the following variables with your cluster-specific values:
CA=system-local-ca.crt FLOATING_OAM_IP=128.224.49.105 OIDC_CLIENT_ID=stx-oidc-client-app OIDC_CLIENT_SECRET=St8rlingX
Create the remotecli-kubectl-oidc-login-kubeconfig.yml file:

cat << EOF > remotecli-kubectl-oidc-login-kubeconfig.yml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ${CA}
    server: https://${FLOATING_OAM_IP}:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://${FLOATING_OAM_IP}:30556/dex
      - --oidc-client-id=${OIDC_CLIENT_ID}
      - --oidc-client-secret=${OIDC_CLIENT_SECRET}
      - --certificate-authority=${CA}
      - --oidc-extra-scope=openid,profile,email,groups
      - --token-cache-dir=cache
      - --skip-open-browser
      interactiveMode: IfAvailable
EOF
Verify the configuration by running the following command:
alias "kubectl"="${PATH_TO_SCRIPT}/client_wrapper.sh kubectl --kubeconfig remotecli-kubectl-oidc-login-kubeconfig.yml"
kubectl get pods -n kube-system
Please visit the following URL in your browser:
http://localhost:8000/

Open the link in your browser. Enter your username, password, and OTP token. Once authenticated in the browser, return to the command line; the command completes.
Cache Behaviour¶
There are two places where the cache is stored: the command line where kubectl is run, and the local browser where the user enters the username, password, and OTP token.
To fully clear the cache, for development and testing purposes or when switching between users, you must clear both.
Clear Cache in Command Line¶
Local kubectl¶
kubectl oidc-login clean
Deleted the token cache from /home/user/.kube/cache/oidc-login
Deleted the token cache from the keyring
Remote Standalone kubectl¶
kubectl oidc-login clean
Deleted the token cache from /home/user/.kube/cache/oidc-login
Deleted the token cache from the keyring
Remote kubectl in remote_cli¶
Remote CLI is a containerized environment and it requires different handling.
cd $HOME/remote_cli_wd
sudo rm -rf cache/
Clear Cache in the Browser¶
For the default web browser used for authentication with kubectl oidc-login, either clear the cookies for the external IdP domain or clear all cookies.
Refer to your browser's official documentation for instructions on clearing cookies; the details are beyond the scope of this document.