Kubernetes Authentication & Authorization¶
Kubernetes API User Authentication Using LDAP Server¶
Overview of LDAP Servers¶
StarlingX can be configured to use an LDAP-compatible server, such as a remote Windows Active Directory server or the Local LDAP server, to authenticate users of the Kubernetes API, using the oidc-auth-apps application.
The Local LDAP server is present in all StarlingX deployments and runs on the controllers. The only exception is DC environments, where this LDAP server runs only on the System Controller's controllers; it is not present on the subclouds' controllers.
The oidc-auth-apps application installs a proxy OIDC identity provider that can be configured to proxy authentication requests to an LDAP-based identity provider, such as Windows Active Directory or the Local LDAP server. For more information, see https://github.com/dexidp/dex. The oidc-auth-apps application also provides an OIDC client for accessing the username and password OIDC login page for user authentication and retrieval of tokens. An oidc-auth CLI script can also be used for OIDC user authentication and retrieval of tokens.
In addition to installing and configuring the oidc-auth-apps application, the admin must also configure Kubernetes cluster’s kube-apiserver to use the oidc-auth-apps OIDC identity provider for validation of tokens in Kubernetes API requests.
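For reference, the StarlingX service parameters and bootstrap overrides described later in this document correspond to the standard kube-apiserver OIDC flags. The sketch below is illustrative only; it assumes the default dex NodePort of 30556 and the default claim names used elsewhere in this document, and the address is a placeholder:
# Illustrative kube-apiserver OIDC flags (values are examples, not a literal command to run)
--oidc-issuer-url=https://<oam-floating-ip>:30556/dex
--oidc-client-id=stx-oidc-client-app
--oidc-username-claim=email
--oidc-groups-claim=groups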
Centralized vs Distributed OIDC Authentication Setup¶
In a Distributed Cloud configuration, you can configure OIDC authentication in a distributed or centralized setup. For other configurations, like AIO-SX, AIO-DX or Standard Cloud, follow the instructions in the distributed setup documented below.
Distributed Setup¶
For a distributed setup, configure the kube-apiserver and the oidc-auth-apps independently for each cloud, System Controller, and all subclouds. The oidc-auth-apps runs on each active controller of the setup and the kube-apiserver is configured to point to the local instance of oidc-auth-apps. For more information, see:
Configure Kubernetes for OIDC Token Validation
The oidc-auth-apps on all clouds can be configured to communicate with the same or different authentication servers (Windows Active Directory and/or LDAP). However, each cloud manages OIDC tokens individually: a user must log in, authenticate, and get an OIDC token for each cloud independently.
Centralized Setup¶
For a centralized setup, the oidc-auth-apps is configured ‘only’ on the System Controller. The kube-apiserver must be configured on all clouds, System Controller, and all subclouds, to point to the centralized oidc-auth-apps running on the System Controller. In the centralized setup, a user logs in, authenticates, and gets an OIDC token from the Central System Controller’s OIDC identity provider, and uses the OIDC token with ‘any’ of the subclouds as well as the System Controller cloud.
For a centralized OIDC authentication setup, use the following procedure:
Procedure
Configure the kube-apiserver parameters on the System Controller and each subcloud either during bootstrapping or by using the system service-parameter-add kubernetes kube_apiserver command after bootstrapping the system, using the System Controller’s floating OAM IP address as the oidc-issuer-url for all clouds.
For example, on the subcloud:
oidc-issuer-url=https://<central-cloud-floating-ip>:<oidc-auth-apps-dex-service-NodePort>/dex
For more information, see Configure Kubernetes for OIDC Token Validation while Bootstrapping the System or Configure Kubernetes for OIDC Token Validation after Bootstrapping the System.
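A sketch of the corresponding commands to run on the System Controller and on each subcloud is shown below; it assumes the default dex NodePort of 30556 and the default claim names, and the address is a placeholder:
~(keystone_admin)]$ system service-parameter-add kubernetes kube_apiserver oidc-issuer-url=https://<central-cloud-floating-ip>:30556/dex
~(keystone_admin)]$ system service-parameter-add kubernetes kube_apiserver oidc-client-id=stx-oidc-client-app
~(keystone_admin)]$ system service-parameter-add kubernetes kube_apiserver oidc-username-claim=email
~(keystone_admin)]$ system service-parameter-add kubernetes kube_apiserver oidc-groups-claim=groups
~(keystone_admin)]$ system service-parameter-apply kubernetes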
Configure the oidc-auth-apps only on the System Controller. For more information, see Configure OIDC Auth Applications.
Postrequisites
For more information on configuring Users, Groups, Authorization, and kubectl for the user, and on retrieving the token on subclouds, see Configure Users, Groups, and Authorization and Configure Kubernetes Client Access.
Configure Kubernetes for OIDC Token Validation while Bootstrapping the System¶
You must configure the Kubernetes cluster’s kube-apiserver to use the oidc-auth-apps OIDC identity provider for validation of tokens in Kubernetes API requests, which use OIDC authentication.
About this task
Complete these steps to configure Kubernetes for OIDC token validation during bootstrapping and deployment.
The values set in this procedure can be changed at any time using service parameters as described in Configure Kubernetes for OIDC Token Validation after Bootstrapping the System.
Procedure
Configure the Kubernetes cluster kube-apiserver by adding the following parameters to the localhost.yml file, during bootstrap:
# cd ~
# cat <<EOF > /home/sysadmin/localhost.yml
apiserver_oidc:
  client_id: stx-oidc-client-app
  issuer_url: https://<oam-floating-ip>:<oidc-auth-apps-dex-service-NodePort>/dex
  username_claim: email
  groups_claim: groups
EOF
where:
<oidc-auth-apps-dex-service-NodePort>
is the port to be configured for the NodePort service for dex in oidc-auth-apps. The default is 30556.
The values of the username_claim and groups_claim parameters could vary for different user and group configurations in your Windows Active Directory or LDAP server.
Note
For IPv6 deployments, ensure that the IPv6 OAM floating address in the issuer_url is
https://[<oam-floating-ip>]:30556/dex (that is, in lower case and wrapped in square brackets).
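For example, a sketch of the corresponding IPv6 bootstrap override (the address is a placeholder and the default NodePort of 30556 is assumed):
apiserver_oidc:
  client_id: stx-oidc-client-app
  issuer_url: https://[<oam-floating-ip>]:30556/dex
  username_claim: email
  groups_claim: groups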
For more information on OIDC Authentication for subclouds, see Centralized vs Distributed OIDC Authentication Setup.
Configure Kubernetes for OIDC Token Validation after Bootstrapping the System¶
You must configure the Kubernetes cluster’s kube-apiserver to use the oidc-auth-apps OIDC identity provider for validation of tokens in Kubernetes API requests, which use OIDC authentication.
About this task
As an alternative to performing this configuration at bootstrap time as described in Configure Kubernetes for OIDC Token Validation while Bootstrapping the System, you can do so at any time using service parameters.
Procedure
Set the following service parameters using the system service-parameter-add kubernetes kube_apiserver command.
For example:
~(keystone_admin)]$ system service-parameter-add kubernetes kube_apiserver oidc-client-id=stx-oidc-client-app
~(keystone_admin)]$ system service-parameter-add kubernetes kube_apiserver oidc-issuer-url=https://${OAMIP}:<oidc-auth-apps-dex-service-NodePort>/dex
~(keystone_admin)]$ system service-parameter-add kubernetes kube_apiserver oidc-username-claim=email
~(keystone_admin)]$ system service-parameter-add kubernetes kube_apiserver oidc-groups-claim=groups
where:
<oidc-auth-apps-dex-service-NodePort> is the port to be configured for the NodePort service for dex in oidc-auth-apps. The default is 30556.
The values of the oidc-username-claim and oidc-groups-claim parameters could vary for different user and group configurations in your Windows Active Directory or LDAP server.
Note
For IPv6 deployments, ensure that the IPv6 OAM floating address is
https://[<oam-floating-ip>]:30556/dex (that is, in lower case and wrapped in square brackets).
The valid combinations of these service parameters are:
none of the parameters
oidc-issuer-url, oidc-client-id, and oidc-username-claim
oidc-issuer-url, oidc-client-id, oidc-username-claim, and oidc-groups-claim
Note
Historical service parameters for OIDC with underscores are still accepted: oidc_client_id, oidc_issuer_url, oidc_username_claim and oidc_groups_claim. These are equivalent to: oidc-client-id, oidc-issuer-url, oidc-username-claim and oidc-groups-claim.
Apply the service parameters.
~(keystone_admin)]$ system service-parameter-apply kubernetes
For more information on OIDC Authentication for subclouds, see Centralized vs Distributed OIDC Authentication Setup.
Set up OIDC Auth Applications¶
The oidc-auth-apps is a system-level application that serves as an OIDC
Identity Provider for authenticating users accessing the Kubernetes API. It
functions as a proxy OIDC provider powered by dexidp.io and can be configured
to forward authentication requests to various types of upstream Identity
Providers. In the current StarlingX environment, it supports proxying to LDAP-based
Identity Providers, including StarlingX’s Local LDAP server and/or remote Windows
Active Directory servers.
In this section, the procedure below shows examples for configuring
oidc-auth-apps to proxy requests to a remote Windows Active Directory
server and/or to StarlingX’s Local LDAP server.
The oidc-auth-apps is packaged in the ISO and uploaded by default.
Configure OIDC Auth Applications¶
Prerequisites
You must have configured the Kubernetes kube-apiserver to use the oidc-auth-apps OIDC identity provider for validation of tokens in Kubernetes API requests, which use OIDC authentication. For more information on configuring the Kubernetes kube-apiserver, see Configure Kubernetes for OIDC Token Validation while Bootstrapping the System or Configure Kubernetes for OIDC Token Validation after Bootstrapping the System.
Procedure
Create certificates for OIDC Servers.
Certificates used by oidc-auth-apps can be managed by cert-manager. Doing so will automatically renew the certificates before they expire. The system-local-ca ClusterIssuer (see System Local CA Issuer) will be used to issue this certificate.
Note
If a signing CA is not a well-known trusted CA, you must ensure the system trusts the CA by specifying it either during the bootstrap phase of system installation, with ssl_ca_cert: <certificate_file> in the Ansible bootstrap overrides file localhost.yml, or by using the system ca-certificate-install command. Also refer to Add a Trusted CA for installing a root CA, which includes instructions to lock/unlock controller nodes when using the system ca-certificate-install command.
Important
The namespace for oidc-auth-apps must be kube-system.
Create the OIDC client and identity provider server certificate and private key pair.
~(keystone_admin)]$ cat <<EOF > oidc-auth-apps-certificate.yaml
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: oidc-auth-apps-certificate
  namespace: kube-system
spec:
  secretName: oidc-auth-apps-certificate
  duration: 2160h    # 90 days
  renewBefore: 360h  # 15 days
  issuerRef:
    name: system-local-ca
    kind: ClusterIssuer
  commonName: <OAM_floating_IP_address>
  subject:
    organizations:
      - ABC-Company
    organizationalUnits:
      - StarlingX-system-oidc-auth-apps
  ipAddresses:
    - <OAM_floating_IP_address>
EOF
Note
The Certificate usage section of the cert-manager documentation (https://cert-manager.io/docs/usage/certificate/) states that you should “Take care when setting the renewBefore field to be very close to the duration as this can lead to a renewal loop, where the Certificate is always in the renewal period.”
In light of the statement above, you must not set renewBefore to a value very close to the duration value, such as a renewBefore of 29 days with a duration of 30 days. Instead, use values such as renewBefore=15 days and duration=30 days to avoid renewal loops.
Apply the configuration.
~(keystone_admin)]$ kubectl apply -f oidc-auth-apps-certificate.yaml
Verify the configuration.
~(keystone_admin)]$ kubectl get certificate oidc-auth-apps-certificate -n kube-system
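Optionally, you can also wait for cert-manager to report the certificate as Ready (a sketch; the timeout value is arbitrary):
~(keystone_admin)]$ kubectl wait --for=condition=Ready certificate/oidc-auth-apps-certificate -n kube-system --timeout=60s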
Configure the OIDC-client with both the OIDC Client and Identity Server Certificate and the OIDC Client and Identity Trusted CA certificate.
Configure the certificate of the root CA that signed the OIDC client and identity provider's server certificate. In this example, it is the ca.crt of the oidc-auth-apps-certificate secret (issued by the system-local-ca ClusterIssuer).
~(keystone_admin)]$ cat <<EOF > stx-oidc-client.yaml
tlsName: oidc-auth-apps-certificate
config:
  # The OIDC-client container mounts the dex-ca-cert secret at /home, therefore
  # issuer_root_ca: /home/<filename-only-of-generic-secret>
  issuer_root_ca: /home/ca.crt
  issuer_root_ca_secret: oidc-auth-apps-certificate
EOF
~(keystone_admin)]$ system helm-override-update oidc-auth-apps oidc-client kube-system --values stx-oidc-client.yaml
Create a secret with the certificate of the CA that signed the certificate of the Identity Providers (IdPs) that you will be using.
If you will use a WAD server, create the secret wad-ca-cert with the certificate of the CA that signed the Active Directory certificate, using the command below.
~(keystone_admin)]$ kubectl create secret generic wad-ca-cert --from-file=wad-ca-cert.crt -n kube-system
If you will use the Local LDAP server, use the Root CA data from oidc-auth-apps-certificate, since it is the same Root CA that signs the Local LDAP certificate (system-local-ca).
The secrets wad-ca-cert and/or oidc-auth-apps-certificate will be used later in the application overrides.
Configure the secret observer to track changes.
Create a YAML configuration to modify the cronSchedule according to your needs. The cronSchedule controls how often the application checks whether the certificate mounted on the dex and oidc-client pods has changed. The following example sets the schedule to every 15 minutes.
~(keystone_admin)]$ cat <<EOF > secret-observer-overrides.yaml
cronSchedule: "*/15 * * * *"
observedSecrets:
  - secretName: "oidc-auth-apps-certificate"
    filename: "ca.crt"
    deploymentToRestart: "stx-oidc-client"
  - secretName: "oidc-auth-apps-certificate"
    filename: "tls.crt"
    deploymentToRestart: "stx-oidc-client"
  - secretName: "oidc-auth-apps-certificate"
    filename: "tls.crt"
    deploymentToRestart: "oidc-dex"
EOF
Execute the following command to update the overrides:
~(keystone_admin)]$ system helm-override-update oidc-auth-apps secret-observer kube-system --values secret-observer-overrides.yaml
Specify user overrides for the oidc-auth-apps application by using the following command:
~(keystone_admin)]$ system helm-override-update oidc-auth-apps dex kube-system --values /home/sysadmin/dex-overrides.yaml
The dex-overrides.yaml file contains the desired dex helm chart overrides (that is, the LDAP connector configuration for the Active Directory service, optional token expiry, and so on), and volume mounts for providing access to the wad-ca-cert secret and/or to the local-ldap-ca-cert secret, described in this section. See examples of the dex-overrides.yaml file for a WAD backend/connector and a Local LDAP backend/connector below.
For the complete list of dex helm chart values supported, see Dex Helm Chart Values. For the complete list of parameters of the dex LDAP connector configuration, see Authentication Through LDAP.
The overall dex documentation is available on dexidp.io. The configuration of dex server version v2.42.0 is described on GitHub (https://github.com/dexidp/dex/blob/v2.42.0/config.yaml.dist), with an example config.dev.yaml (https://github.com/dexidp/dex/blob/v2.42.0/config.dev.yaml).
The examples below configure a token expiry of ten hours, the LDAP connectors to the remote servers using HTTPS (LDAPS) with the servers' CA secrets, the required remote server login information (that is, bindDN and bindPW), and example userSearch and groupSearch clauses.
For only a WAD server, the configuration is shown below.
config:
  staticClients:
  - id: stx-oidc-client-app
    name: STX OIDC Client app
    redirectURIs: ['https://<OAM floating IP address>:30555/callback']
    secret: BetterSecret
  expiry:
    idTokens: "10h"
  connectors:
  - type: ldap
    name: WAD
    id: wad-1
    config:
      host: pv-windows-acti.windows-activedir.example.com:636
      rootCA: /etc/ssl/certs/adcert/wad-ca-cert.crt
      insecureNoSSL: false
      insecureSkipVerify: false
      bindDN: cn=Administrator,cn=Users,dc=windows-activedir,dc=example,dc=com
      bindPW: [<password>]
      usernamePrompt: Username
      userSearch:
        baseDN: ou=Users,ou=Titanium,dc=windows-activedir,dc=example,dc=com
        filter: "(objectClass=user)"
        username: sAMAccountName
        idAttr: sAMAccountName
        emailAttr: sAMAccountName
        nameAttr: displayName
      groupSearch:
        baseDN: ou=Groups,ou=Titanium,dc=windows-activedir,dc=example,dc=com
        filter: "(objectClass=group)"
        userMatchers:
        - userAttr: DN
          groupAttr: member
        nameAttr: cn
volumeMounts:
- mountPath: /etc/ssl/certs/adcert
  name: certdir
- mountPath: /etc/dex/tls
  name: https-tls
volumes:
- name: certdir
  secret:
    secretName: wad-ca-cert
- name: https-tls
  secret:
    defaultMode: 420
    secretName: oidc-auth-apps-certificate
For only the Local LDAP server, the configuration is shown below. The value of bindPW can be retrieved with the keyring get ldap ldapadmin command executed on the controller where the Local LDAP server is running. In DC environments, the MGMT floating IP address to be used is the one from the System Controller.
cat <<EOF > dex-overrides.yaml
config:
  staticClients:
  - id: stx-oidc-client-app
    name: STX OIDC Client app
    secret: St8rlingX
    redirectURIs:
    - https://<OAM floating IP address>:30555/callback
  expiry:
    idTokens: "10h"
  connectors:
  - type: ldap
    name: LocalLDAP
    id: localldap-1
    config:
      host: <MGMT floating IP address>:636
      rootCA: /etc/ssl/certs/adcert/ca.crt
      insecureNoSSL: false
      insecureSkipVerify: false
      bindDN: CN=ldapadmin,DC=cgcs,DC=local
      bindPW: [<password>]
      usernamePrompt: Username
      userSearch:
        baseDN: ou=People,dc=cgcs,dc=local
        filter: "(objectClass=posixAccount)"
        username: uid
        idAttr: DN
        emailAttr: uid
        nameAttr: gecos
      groupSearch:
        baseDN: ou=Group,dc=cgcs,dc=local
        filter: "(objectClass=posixGroup)"
        userMatchers:
        - userAttr: uid
          groupAttr: memberUid
        nameAttr: cn
volumeMounts:
- mountPath: /etc/ssl/certs/adcert
  name: certdir
- mountPath: /etc/dex/tls
  name: https-tls
volumes:
- name: certdir
  secret:
    secretName: oidc-auth-apps-certificate
- name: https-tls
  secret:
    defaultMode: 420
    secretName: oidc-auth-apps-certificate
EOF
If both WAD and Local LDAP servers are used at the same time, use the examples above with the connectors from WAD and Local LDAP in the same connectors list, while the volumes section to be used is the one shown below.
volumes:
- name: certdir
  projected:
    sources:
    - secret:
        name: wad-ca-cert
    - secret:
        name: local-ldap-ca-cert
- name: https-tls
  secret:
    defaultMode: 420
    secretName: oidc-auth-apps-certificate
If more than one Windows Active Directory service is required for authenticating the different users of StarlingX, multiple ldap type connectors can be configured, one for each Windows Active Directory service.
If more than one userSearch plus groupSearch clause is required for the same Windows Active Directory service, multiple ldap type connectors, with the same host information but different userSearch plus groupSearch clauses, should be used.
Whenever you use multiple ldap type connectors, ensure you use unique name: and id: parameters for each connector.
(Optional) There is a default internal secret in the above dex configuration for the stx-oidc-client-app StaticClient. This internal secret is used between the oidc-client container and the dex container. It is recommended that you configure a unique, more secure password. You can change this using helm overrides. For example, to change the secret, first run the following command to see the default settings. In this example, 10.10.10.2 is the StarlingX OAM floating IP address.
~(keystone_admin)]$ system helm-override-show oidc-auth-apps dex kube-system
config:
  staticClients:
  - id: stx-oidc-client-app
    name: STX OIDC Client app
    redirectURIs: ['https://10.10.10.2:30555/callback']
    secret: St8rlingX
Change the secret from the output and copy the entire configuration section shown above into your dex overrides file, as shown in the example below.
Warning
Do not forget to include the id, name, and redirectURIs parameters.
An override of the secret in the staticClients section of the dex helm chart in the previous step must be accompanied by an override in the oidc-client helm chart.
The following override is sufficient for changing the secret in the /home/sysadmin/oidc-client-overrides.yaml file.
config:
  client_secret: BetterSecret
Apply the oidc-client overrides using the following command:
~(keystone_admin)]$ system helm-override-update oidc-auth-apps oidc-client kube-system --values /home/sysadmin/oidc-client-overrides.yaml --reuse-values
Note
If you need to manually override the secrets, the client_secret in the oidc-client overrides must match the staticClients secret in the dex overrides, otherwise the oidc-auth CLI client will not function.
Use the system application-apply command to apply the configuration:
~(keystone_admin)]$ system application-apply oidc-auth-apps
Default helm overrides for oidc-auth-apps application¶
For backwards compatibility reasons, the default helm overrides for the dex helm chart are:
Note
It is NOT recommended to use these; instead, create certificates using cert-manager and explicitly refer to the resulting certificate secrets in user-specified helm overrides, as described in the procedure above.
image:
  repository: ghcr.io/dexidp/dex
  pullPolicy: IfNotPresent
  tag: v2.40.0
imagePullSecrets:
  - name: default-registry-key
env:
  name: KUBERNETES_POD_NAMESPACE
  value: kube-system
config:
  issuer: https://<OAM_IP>:30556/dex
  staticClients:
    - id: stx-oidc-client-app
      name: STX OIDC Client app
      secret: St8rlingX
      redirectURIs:
        - https://<OAM_IP>:30555/callback
  enablePasswordDB: false
  web:
    tlsCert: /etc/dex/tls/tls.crt
    tlsKey: /etc/dex/tls/tls.key
  storage:
    type: kubernetes
    config:
      inCluster: true
  oauth2:
    skipApprovalScreen: true
  logger:
    level: debug
service:
  type: NodePort
  ports:
    https:
      nodePort: 30556
https:
  enabled: true
grpc:
  enabled: false
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
volumeMounts:
  - mountPath: /etc/dex/tls/
    name: https-tls
volumes:
  - name: https-tls
    secret:
      defaultMode: 420
      secretName: local-dex.tls
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
podLabels:
  app: dex
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - dex
        topologyKey: kubernetes.io/hostname
The default helm overrides for oidc-client are:
config:
  client_id: stx-oidc-client-app
  client_secret: St8rlingX
  issuer: https://<OAM_IP>:30556/dex
  issuer_root_ca: /home/dex-ca.pem
  issuer_root_ca_secret: dex-client-secret
  listen: https://0.0.0.0:5555
  redirect_uri: https://<OAM_IP>:30555/callback
  tlsCert: /etc/dex/tls/https/server/tls.crt
  tlsKey: /etc/dex/tls/https/server/tls.key
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
service:
  type: NodePort
  port: 5555
  nodePort: 30555
replicas: <replicate count>
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - stx-oidc-client
        topologyKey: kubernetes.io/hostname
helmv3Compatible: true
The default helm overrides for secret-observer are:
namespace: "kube-system"
observedSecrets:
- secretName: "dex-client-secret"
filename: "dex-ca.pem"
deploymentToRestart: "stx-oidc-client"
- secretName: "local-dex.tls"
filename: "tls.crt"
deploymentToRestart: "stx-oidc-client"
- secretName: "local-dex.tls"
filename: "tls.crt"
deploymentToRestart: "oidc-dex"
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
- key: "node-role.kubernetes.io/control-plane"
operator: "Exists"
effect: "NoSchedule"
Configure Kubernetes Client Access¶
You can configure Kubernetes access for local and remote clients to authenticate through Windows Active Directory or LDAP server using oidc-auth-apps OIDC Identity Provider (dex).
Configure Kubernetes Local Client Access¶
About this task
Use the procedure below to configure Kubernetes access for a user logged in to the active controller either through SSH or by using the system console.
Note
If the user's SSH/console access is to be authenticated using an external WAD or LDAP server, refer also to SSH Authentication.
Procedure
Execute the commands below to create the Kubernetes configuration file for the logged in user. These commands only need to be executed once. The file ~/.kube/config will be created; the user referred to in its contents is the currently logged in user.
~$ kubeconfig-setup
~$ source ~/.profile
Run the oidc-auth script to authenticate and update the user credentials in the Kubernetes configuration file.
~$ oidc-auth
Note
The oidc-auth script has the following optional parameters that may need to be specified:
--cacert <path to ca cert file>: The path provides the CA certificate that validates the server certificate specified by the -c option. By default, the command reads the value of the OS_CACERT environment variable. If none is specified, the command accesses the server without verifying its certificate.
-c <OIDC_app_IP>: This is the IP where the OIDC app is running. When not provided, it defaults to "oamcontroller", which is an alias for the controller floating OAM IP. There are two instances where this parameter is used: for local client access inside subclouds of a centralized setup, where the oidc-auth-apps runs only on the System Controller, and for remote client access.
-p <password>: This is the user password. If the user does not enter the password, the user is prompted to do so. This parameter is essential in non-interactive shells.
-u <username>: This is the user to be authenticated. When not provided, it defaults to the currently logged in user. Usually, this parameter is needed in remote client access scenarios, where the currently logged in user is different from the user to be authenticated.
-b <backend_ID>: This parameter specifies the backend used for authentication. It is only needed if there is more than one backend configured at the oidc-auth-apps OIDC Identity Provider (dex).
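For example, a hypothetical invocation that combines these parameters for a subcloud in a centralized DC setup (the IP address, username, and backend ID below are placeholders):
~$ oidc-auth -c <system-controller-oam-ip> -u testuser -b wad-1
Password:
Successful authentication.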
Configure Kubernetes Remote Client Access¶
Kubernetes Remote Client Access using the Host Directly¶
Procedure
Install the kubectl client CLI on the host. Follow the instructions on Install and Set Up kubectl on Linux. The example below can be used for Ubuntu.
% sudo apt-get update
% sudo apt-get install -y apt-transport-https
% curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
% echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
% sudo apt-get update
% sudo apt-get install -y kubectl
Contact your system administrator for the StarlingX system-local-ca CA certificate. Copy this certificate to your system as stx-ca.crt.
Create an empty Kubernetes configuration file (the default path is ~/.kube/config). Run the commands below to update this file. Use the OAM IP address and the system-local-ca CA certificate acquired in the previous step. If the OAM IP is IPv6, use the IP enclosed in brackets (example: [fd00::a14:803]). In the example below, the user is admin-user; change it to the name of the user you want to authenticate.
$ MYUSER="admin-user"
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
$ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 stx-ca.crt)
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
Get a Kubernetes authentication token. There are two options: the first is through the oidc-auth script and the second is through the browser. Both options are described below.
To get the token through the oidc-auth script, execute the steps below.
Install the Python mechanize module using the following command:
$ sudo pip install mechanize
Install the oidc-auth from a StarlingX mirror.
Execute the command below to get the token and update it in the Kubernetes configuration file. If the target environment has multiple backends configured, you will need to use the parameter -b <backend_ID>. If the target environment is a DC system with a centralized setup, you should use the OAM IP of the System Controller.
$ oidc-auth -u ${MYUSER} -c <OAM_IP>
To get the token through a browser, execute the steps below.
Use the following URL to log in to the oidc-auth-apps OIDC client: https://<oam-floating-ip-address>:30555. If the target environment is a DC system with a centralized setup, you should use the OAM IP of the System Controller.
If the StarlingX oidc-auth-apps has been configured for multiple ‘ldap’ connectors, select the Windows Active Directory or the LDAP server for authentication.
Enter your Username and Password.
Click Login. The ID token and Refresh token are displayed as follows:
ID Token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjQ4ZjZkYjcxNGI4ODQ5ZjZlNmExM2Y2ZTQzODVhMWE1MjM0YzE1NTQifQ.eyJpc3MiOiJodHRwczovLzEyOC4yMjQuMTUxLjE3MDozMDU1Ni9kZXgiLCJzdWIiOiJDZ2R3ZG5SbGMzUXhFZ1JzWkdGdyIsImF1ZCI6InN0eC1vaWRjLWNsaWVudC1hcHAiLCJleHAiOjE1ODI1NzczMTksImlhdCI6MTU4MjU3NzMwOSwiYXRfaGFzaCI6ImhzRG1kdTFIWGFCcXFNLXBpYWoyaXciLCJlbWFpbCI6InB2dGVzdDEiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6InB2dGVzdDEifQ.TEZ-YMd8kavTGCw_FUR4iGQWf16DWsmqxW89ZlKHxaqPzAJUjGnW5NRdRytiDtf1d9iNIxOT6cGSOJI694qiMVcb-nD856OgCvU58o-e3ZkLaLGDbTP2mmoaqqBYW2FDIJNcV0jt-yq5rc9cNQopGtFXbGr6ZV2idysHooa7rA1543EUpg2FNE4qZ297_WXU7x0Qk2yDNRq-ngNQRWkwsERM3INBktwQpRUg2na3eK_jHpC6AMiUxyyMu3o3FurTfvOp3F0eyjSVgLqhC2Rh4xMbK4LgbBTN35pvnMRwOpL7gJPgaZDd0ttC9L5dBnRs9uT-s2g4j2hjV9rh3KciHQ
Access Token: wcgw4mhddrk7jd24whofclgmj
Claims:
{
  "iss": "https://128.224.151.170:30556/dex",
  "sub": "CgdwdnRlc3QxEgRsZGFw",
  "aud": "stx-oidc-client-app",
  "exp": 1582577319,
  "iat": 1582577319,
  "at_hash": "hsDmdu1HXaBqqM-piaj2iw",
  "email": "testuser",
  "email_verified": true,
  "groups": [
    "billingDeptGroup",
    "managerGroup"
  ],
  "name": "testuser"
}
Refresh Token: ChljdmoybDZ0Y3BiYnR0cmp6N2xlejNmd3F5Ehlid290enR5enR1NWw1dWM2Y2V4dnVlcHli
Use the ID token to set the Kubernetes credentials in kubectl configs:
$ TOKEN=<ID_token_string>
$ kubectl config set-credentials ${MYUSER} --token ${TOKEN}
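To confirm that the token works, run any query the user is authorized for, for example:
$ kubectl get pods -n default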
Kubernetes API User Authorization For LDAP Server¶
Configure Users, Groups, and Authorization¶
In the examples provided below, Kubernetes permissions will be given to the daveuser user. Two different ways to do this are presented: in the first option, the daveuser user is directly bound to a role; in the second option, daveuser is indirectly associated with a Kubernetes group that has permissions.
Note
For larger environments, like a DC with many subclouds, or to minimize Kubernetes custom cluster configurations, use the second option, where permissions are granted through Kubernetes groups. Apply the Kubernetes RBAC policy to the central cloud and to each subcloud where Kubernetes permissions are required.
Grant Kubernetes permissions through direct role binding¶
Create the following deployment file and deploy the file with kubectl apply -f <filename>.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: daveuser-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: daveuser
Grant Kubernetes permissions through groups¶
Create the following deployment file and deploy the file with kubectl apply -f <filename>.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-reader-rolebinding
subjects:
- kind: Group
  name: k8s-reader
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-reader-role
  apiGroup: rbac.authorization.k8s.io
---
# Note: the ClusterRole "cluster-admin" already exists in the system.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-rolebinding
subjects:
- kind: Group
  name: k8s-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Create the groups k8s-reader and k8s-admin in your Windows Active Directory or LDAP server. See Microsoft documentation on Windows Active Directory for additional information on adding users and groups to Windows Active Directory.
To give Kubernetes permissions to daveuser, add this user to either the k8s-reader or k8s-admin group in your Windows Active Directory or LDAP server, depending on the permissions you want to grant. The permissions are given because there is a mapping between a Windows Active Directory or LDAP group and a Kubernetes group with the same name. To remove Kubernetes permissions from the daveuser user, remove this user from the k8s-reader and k8s-admin groups in your Windows Active Directory or LDAP server.
Note
The group names k8s-reader and k8s-admin are arbitrary. As long as the Windows Active Directory or LDAP group has the same name as the Kubernetes group, the mapping will happen. For example, if a more company-specific approach is preferred, the k8s-reader and k8s-admin groups could be named after departments, like billingDeptGroup and managerGroup.
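To verify the resulting permissions without logging in as the user, an admin can impersonate the user and group with kubectl auth can-i. This is a sketch assuming the RBAC definitions above have been applied; the user and group names follow the examples in this section:
$ kubectl auth can-i list pods --as daveuser --as-group k8s-reader
yes
$ kubectl auth can-i create pods --as daveuser --as-group k8s-reader
no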
Private Namespace and Restricted RBAC¶
A non-admin type user typically does not have permissions for any cluster-scoped resources and only has read and/or write permissions to resources in one or more namespaces.
About this task
Note
All of the RBAC resources for managing non-admin type users, although they may apply to private namespaces, are created in kube-system such that only admin level users can manage non-admin type users, roles, and rolebindings.
The following example creates a non-admin service account called dave-user with read/write type access to a single private namespace (billing-dept-ns).
Note
The following example creates and uses ServiceAccounts as the user mechanism and subject for the rolebindings, however the procedure equally applies to user accounts defined in an external Windows Active Directory as the subject of the rolebindings.
Procedure
If it does not already exist, create a general user role defining restricted permissions for general users.
This is of the type ClusterRole so that it can be used in the context of any namespace when binding to a user.
Create the user role definition file.
% cat <<EOF > general-user-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: general-user
rules:
# For the core API group (""), allow full access to all resource types
# EXCEPT for resource policies (limitranges and resourcequotas) only allow read access
- apiGroups: [""]
  resources: ["bindings", "configmaps", "endpoints", "events", "persistentvolumeclaims", "pods", "podtemplates", "replicationcontrollers", "secrets", "serviceaccounts", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: [ "limitranges", "resourcequotas" ]
  verbs: ["get", "list"]
# Allow full access to all resource types of the following explicit list of apiGroups.
# Notable exceptions here are:
#    ApiGroup                       ResourceTypes
#    -------                        -------------
#    policy                         podsecuritypolicies, poddisruptionbudgets
#    networking.k8s.io              networkpolicies
#    admissionregistration.k8s.io   mutatingwebhookconfigurations, validatingwebhookconfigurations
#
- apiGroups: ["apps", "batch", "extensions", "autoscaling", "apiextensions.k8s.io", "rbac.authorization.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Cert Manager API access
- apiGroups: ["cert-manager.io", "acme.cert-manager.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF
Apply the definition.
~(keystone_admin)$ kubectl apply -f general-user-clusterrole.yaml
Create the billing-dept-ns namespace, if it does not already exist.
~(keystone_admin)$ kubectl create namespace billing-dept-ns
Create both the dave-user service account and the namespace-scoped RoleBinding.
The RoleBinding binds the general-user role to the dave-user ServiceAccount for the billing-dept-ns namespace.
Create the account definition file.
% cat <<EOF > dave-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dave-user
  namespace: kube-system
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dave-user-sa-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: dave-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dave-user
  namespace: billing-dept-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: general-user
subjects:
- kind: ServiceAccount
  name: dave-user
  namespace: kube-system
EOF
Apply the definition.
% kubectl apply -f dave-user.yaml
If the user requires use of the local docker registry, create an openstack user account for authenticating with the local docker registry.
If a project does not already exist for this user, create one.
% openstack project create billing-dept-ns
Create an openstack user in this project.
% openstack user create --password P@ssw0rd \
  --project billing-dept-ns dave-user
Note
Substitute a password conforming to your password formatting rules for P@ssw0rd.
Create a secret containing these userid/password credentials for use as an ImagePullSecret.
% kubectl create secret docker-registry registry-local-dave-user --docker-server=registry.local:9001 --docker-username=dave-user --docker-password=P@ssw0rd --docker-email=noreply@windriver.com -n billing-dept-ns
dave-user can now push images to registry.local:9001/dave-user/ and use these images for pods by adding the secret above as an ImagePullSecret in the pod spec.
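For example, a minimal pod spec sketch that references the ImagePullSecret created above; the pod name, image name, and tag are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: billing-app
  namespace: billing-dept-ns
spec:
  imagePullSecrets:
  - name: registry-local-dave-user
  containers:
  - name: billing-app
    image: registry.local:9001/dave-user/<image-name>:<tag>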
If this user requires the ability to use helm, do the following.
Create a ClusterRole for reading namespaces, if one does not already exist.
% cat <<EOF > namespace-reader-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-reader
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "watch", "list"]
EOF
Apply the configuration.
% kubectl apply -f namespace-reader-clusterrole.yaml
Create a RoleBinding for the tiller account of the user’s namespace.
Note
The tiller account of the user’s namespace must be named ‘tiller’.
% cat <<EOF > read-namespaces-billing-dept-ns-tiller.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-namespaces-billing-dept-ns-tiller
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: billing-dept-ns
roleRef:
  kind: ClusterRole
  name: namespace-reader
  apiGroup: rbac.authorization.k8s.io
EOF
Apply the configuration.
% kubectl apply -f read-namespaces-billing-dept-ns-tiller.yaml
Resource Management¶
Kubernetes supports two types of resource policies, LimitRange and ResourceQuota.
LimitRange¶
By default, containers run with unbounded resources on a Kubernetes cluster. This is undesirable, as a single Pod could monopolize all available resources on a worker node. A LimitRange is a policy to constrain resource allocations (for Pods or Containers) in a particular namespace.
Specifically, a LimitRange policy provides constraints that can:
Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
Enforce a ratio between request and limit for a resource in a namespace.
Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
See https://kubernetes.io/docs/concepts/policy/limit-range/ for more details.
An example of LimitRange policies for the billing-dept-ns namespace of the example in Private Namespace and Restricted RBAC is shown below:
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-per-container-limit
  namespace: billing-dept-ns
spec:
  limits:
  - max:
      cpu: "800m"
      memory: "1Gi"
    min:
      cpu: "100m"
      memory: "99Mi"
    default:
      cpu: "700m"
      memory: "700Mi"
    defaultRequest:
      cpu: "110m"
      memory: "111Mi"
    type: Container
---
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-per-pod-limit
  namespace: billing-dept-ns
spec:
  limits:
  - max:
      cpu: "2"
      memory: "2Gi"
    type: Pod
---
apiVersion: v1
kind: LimitRange
metadata:
  name: pvc-limit
  namespace: billing-dept-ns
spec:
  limits:
  - type: PersistentVolumeClaim
    max:
      storage: 3Gi
    min:
      storage: 1Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-ratio-pod-limit
  namespace: billing-dept-ns
spec:
  limits:
  - maxLimitRequestRatio:
      memory: 10
    type: Pod
ResourceQuota¶
A ResourceQuota policy object provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project. ResourceQuota limits can be created for cpu, memory, storage and resource counts for all standard namespaced resource types such as secrets, configmaps, etc.
See https://kubernetes.io/docs/concepts/policy/resource-quotas/ for more details.
An example of ResourceQuota policies for the billing-dept-ns namespace of Private Namespace and Restricted RBAC is shown below:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quotas
  namespace: billing-dept-ns
spec:
  hard:
    persistentvolumeclaims: "1"
    services.loadbalancers: "2"
    services.nodeports: "0"
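To check current consumption against these quotas, you can describe the ResourceQuota in the namespace, for example:
~(keystone_admin)$ kubectl describe resourcequota resource-quotas -n billing-dept-ns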
Pod Security Admission Controller¶
Pod Security Admission (PSA) Controller is the PSP replacement. This document describes the PSA functionality, which is 'beta' quality in Kubernetes v1.24.
The PSA controller acts on creation and modification of the pod and determines if it should be admitted based on the requested security context and the policies defined by Pod Security Standards.
Pod Security levels¶
Pod Security Admission levels refer to the 3 policies defined by the Pod Security Standards: privileged, baseline, and restricted.
- Privileged
Unrestricted policy, providing the widest possible level of permissions. This policy allows for known privilege escalations. It aims at system- and infrastructure-level workloads managed by privileged, trusted users.
- Baseline
Minimally restrictive policy which prevents known privilege escalations. It aims at ease of adoption for common containerized workloads for non-critical applications.
- Restricted
Heavily restricted policy, following current Pod hardening best practices. It is targeted at operators and developers of security-critical applications, as well as lower-trust users.
Pod Security Admission labels for namespaces¶
Pod security restrictions are applied at the namespace level.
With the PSA feature enabled, namespaces can be configured to define the admission control mode to be used for pod security in each namespace. Kubernetes defines a set of labels to set predefined Pod Security levels for a namespace. The label defines what action the control plane takes if a potential violation is detected.
A namespace can configure any or all modes, or set different levels for different modes. The modes are:
- enforce
Policy violations will cause the pod to be rejected.
- audit
Policy violations will trigger the addition of an audit annotation to the event recorded in the Kubernetes audit log but are otherwise allowed.
- warn
Policy violations will trigger a user-facing warning but are otherwise allowed.
For each mode, there are two labels that determine the policy used.
This is a generic namespace configuration using labels.
# label indicates which policy level to apply for the mode.
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# LEVEL must be one of `privileged`, `baseline`, or `restricted`.
pod-security.kubernetes.io/<MODE>: <LEVEL>
# Optional: per-mode version label can be used to pin the policy to the
# version that shipped with a given Kubernetes minor version (e.g. v1.24).
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# VERSION must be a valid Kubernetes minor version, or `latest`.
pod-security.kubernetes.io/<MODE>-version: <VERSION>
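For example, the labels can also be applied to an existing namespace with kubectl; the namespace name and the levels below are placeholders:
$ kubectl label --overwrite namespace <namespace> \
    pod-security.kubernetes.io/enforce=baseline \
    pod-security.kubernetes.io/warn=restricted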
For more information refer to https://kubernetes.io/docs/concepts/security/pod-security-admission/.
Enable Pod Security Admission¶
To enable PSA, the Pod Security feature gate must be enabled.
Starting with Kubernetes version 1.24, the Pod Security feature gate is enabled by default.
For Kubernetes version 1.22, the Pod Security feature gate can be enabled using the feature-gates option in the bootstrap overrides file, localhost.yml, as shown in the example below:
apiserver_extra_args:
  feature-gates: "TTLAfterFinished=true,HugePageStorageMediumSize=true,RemoveSelfLink=false,MemoryManager=true,PodSecurity=true"
See Kubernetes Custom Configuration for more details on kubernetes
configuration, apiserver_extra_args and apiserver_extra_volumes.
Configure defaults for the Pod Security Admission Controller¶
The PSA controller can be configured with default security polices and exemptions at bootstrap time.
The default PSA controller configuration applies to namespaces that are not configured with the pod-security.kubernetes.io labels to specify a security level and mode. For example, if you display the namespace description using kubectl describe namespace <namespace> and the pod-security.kubernetes.io labels are not displayed, then the behavior of the namespace will follow the default PSA level, mode, and version configuration set with the PodSecurity plugin of the AdmissionConfiguration resource.
To configure cluster-wide default policies and/or exemptions, the PodSecurity plugin of the AdmissionConfiguration resource can be used. The AdmissionConfiguration resource is configurable at bootstrap time with the apiserver_extra_args and apiserver_extra_volumes overrides in the localhost.yml file.
Any policy that is applied via namespace labels will take precedence.
Example of configuration added to localhost.yml:
apiserver_extra_args:
  admission-control-config-file: "/etc/kubernetes/admission-control-config-file.yaml"
apiserver_extra_volumes:
  - name: admission-control-config-file
    mountPath: "/etc/kubernetes/admission-control-config-file.yaml"
    pathType: "File"
    readOnly: true
    content: |
      apiVersion: apiserver.config.k8s.io/v1
      kind: AdmissionConfiguration
      plugins:
      - name: PodSecurity
        configuration:
          apiVersion: pod-security.admission.config.k8s.io/v1beta1
          kind: PodSecurityConfiguration
          defaults:
            enforce: "privileged"
            enforce-version: "latest"
            audit: "privileged"
            audit-version: "latest"
            warn: "privileged"
            warn-version: "latest"
See Kubernetes Custom Configuration for more details on kubernetes
configuration, apiserver_extra_args and apiserver_extra_volumes.
The generic definition of the AdmissionConfiguration resource can be found
at
https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/.
Platform namespaces configuration¶
In preparation for PSA controller full support, namespace labels have been added to all the namespaces used by the platform. System namespaces, such as kube-system and deployment, as well as application namespaces, such as cert-manager, have been created by default with the privileged label level.
The following labels configuration is applied by default to Platform namespaces:
controller-0:~$ kubectl describe namespace kube-system
Name: kube-system
Labels: kubernetes.io/metadata.name=kube-system
pod-security.kubernetes.io/audit=privileged
pod-security.kubernetes.io/audit-version=latest
pod-security.kubernetes.io/enforce=privileged
pod-security.kubernetes.io/enforce-version=latest
pod-security.kubernetes.io/warn=privileged
pod-security.kubernetes.io/warn-version=latest
Annotations: <none>
Status: Active
No resource quota.
No LimitRange resource
Pod Security Admission Controller - Usage Example¶
This section walks through a usage example of PSA where you will:
Create a namespace for each of the 3 security policies levels: privileged, baseline and restricted.
Create a yaml file with a privileged pod configuration.
Create a privileged pod in all 3 namespaces.
The pod creation will only be successful in the privileged namespace.
controller-0:~$ vi baseline-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: baseline-ns
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.24
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/warn-version: v1.24
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/audit-version: v1.24
controller-0:~$ kubectl apply -f baseline-ns.yaml
controller-0:~$ vi privileged-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: privileged-ns
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.24
    pod-security.kubernetes.io/warn: privileged
    pod-security.kubernetes.io/warn-version: v1.24
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/audit-version: v1.24
controller-0:~$ kubectl apply -f privileged-ns.yaml
controller-0:~$ vi restricted-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: restricted-ns
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.24
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.24
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.24
controller-0:~$ kubectl apply -f restricted-ns.yaml
controller-0:~$ vi privileged-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
controller-0:~$ kubectl -n privileged-ns apply -f privileged-pod.yaml
pod/privileged created
controller-0:~$ kubectl -n baseline-ns apply -f privileged-pod.yaml
Error from server (Failure): error when creating "privileged-pod.yaml": privileged (container "pause" must not set securityContext.privileged=true)
controller-0:~$ kubectl -n restricted-ns apply -f privileged-pod.yaml
Error from server (Failure): error when creating "privileged-pod.yaml": privileged (container "pause" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "pause" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pause" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pause" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pause" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
controller-0:~$
For more information refer to https://kubernetes.io/docs/concepts/security/pod-security-admission/.
Deprovision LDAP Server Authentication & Authorization¶
You can remove Windows Active Directory or LDAP authentication from StarlingX.
Procedure
Remove the configuration of kube-apiserver to use oidc-auth-apps for authentication.
Determine the UUIDs of parameters used in the kubernetes kube-apiserver group.
These include oidc-client-id, oidc-groups-claim, oidc-issuer-url and oidc-username-claim.
~(keystone_admin)]$ system service-parameter-list
Delete each parameter.
~(keystone_admin)]$ system service-parameter-delete <UUID>
Apply the changes.
~(keystone_admin)]$ system service-parameter-apply kubernetes
Uninstall oidc-auth-apps.
~(keystone_admin)]$ system application-remove oidc-auth-apps
Clear the helm-override configuration.
~(keystone_admin)]$ system helm-override-update oidc-auth-apps dex kube-system --reset-values
~(keystone_admin)]$ system helm-override-show oidc-auth-apps dex kube-system
~(keystone_admin)]$ system helm-override-update oidc-auth-apps oidc-client kube-system --reset-values
~(keystone_admin)]$ system helm-override-show oidc-auth-apps oidc-client kube-system
~(keystone_admin)]$ system helm-override-update oidc-auth-apps secret-observer kube-system --reset-values
~(keystone_admin)]$ system helm-override-show oidc-auth-apps secret-observer kube-system
Remove secrets that contain certificate data. Depending on your configuration, some secrets listed below may not exist.
~(keystone_admin)]$ kubectl delete secret dex-ca-cert -n kube-system
~(keystone_admin)]$ kubectl delete secret oidc-auth-apps-certificate -n kube-system
~(keystone_admin)]$ kubectl delete secret wad-ca-cert -n kube-system
~(keystone_admin)]$ kubectl delete secret local-ldap-ca-cert -n kube-system
~(keystone_admin)]$ kubectl delete secret local-dex.tls -n kube-system
~(keystone_admin)]$ kubectl delete secret dex-client-secret -n kube-system
Remove any RBAC RoleBindings added for OIDC users and/or groups.
For example:
$ kubectl delete clusterrolebinding testuser-rolebinding
$ kubectl delete clusterrolebinding billingdeptgroup-rolebinding
Kubernetes API User Access¶
Access Kubernetes CLI locally from SSH/Local Console Session¶
You can access the system via a local CLI from the active controller node’s local console or by SSH-ing to the OAM floating IP Address.
It is highly recommended that only ‘sysadmin’ and a small number of admin level user accounts be allowed to SSH to the system.
Using the sysadmin account and the Local CLI, you can perform all required system maintenance, administration and troubleshooting tasks.
For sysadmin Account¶
By default, the sysadmin account has Kubernetes Admin credentials.
If you plan on customizing the sysadmin's kubectl configuration on the
StarlingX Controller (for example, kubectl config set-... or
oidc-auth), you should use a private KUBECONFIG file and NOT
the system-managed KUBECONFIG file /etc/kubernetes/admin.conf, which can be
changed and overwritten by the system.
Copy /etc/kubernetes/admin.conf to a private file under
/home/sysadmin such as /home/sysadmin/.kube/config, and update
/home/sysadmin/.profile to have the KUBECONFIG environment variable
point to the private file.
For example, the following commands set up a private KUBECONFIG file.
# ssh sysadmin@<oamFloatingIpAddress>
Password:
% mkdir .kube
% cp /etc/kubernetes/admin.conf .kube/config
% echo "export KUBECONFIG=~/.kube/config" >> ~/.profile
% exit
Confirm that the KUBECONFIG environment variable is set correctly
and that kubectl commands are functioning properly.
# ssh sysadmin@<oamFloatingIpAddress>
Password:
% env | fgrep KUBE
KUBECONFIG=/home/sysadmin/.kube/config
% kubectl get pods
You can now access all Kubernetes CLI commands.
kubectl commands
Kubernetes commands are executed with the kubectl command.
For example:
~(keystone_admin)]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
controller-0 Ready master 5d19h v1.13.5
~(keystone_admin)]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dashboard-kubernetes-dashboard-7749d97f95-bzp5w 1/1 Running 0 3d18h
Helm commands
Helm commands are executed with the helm command.
For example:
% helm repo add bitnami https://charts.bitnami.com/bitnami
% helm repo update
% helm repo list
% helm search repo
% helm install wordpress bitnami/wordpress
For an LDAP Account¶
Use kubeconfig-setup to set up KUBECONFIG for the local environment.
$ kubeconfig-setup
$ source ~/.profile
Use oidc-auth to authenticate via OIDC/LDAP.
$ oidc-auth
Using "joefulladmin" as username.
Password:
Successful authentication.
Updated /home/joefulladmin/.kube/config .
Use kubectl to test access to Kubernetes commands / resources (admin and non-admin).
# Displaying anything in 'kube-system' namespace requires 'cluster-admin' privileges
$ kubectl -n kube-system get secrets
NAME TYPE DATA AGE
ceph-admin Opaque 2 3d8h
ceph-pool-kube-cephfs-data kubernetes.io/cephfs 4 3d8h
ceph-pool-kube-rbd kubernetes.io/rbd 2 3d8h
# Anyone can display resources in 'default' namespace
$ kubectl -n default get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d9h
Access Kubernetes CLIs and GUI Remotely¶
For details on how to setup and use remote access to StarlingX CLIs and GUI, see Remote Access.
Access Kubernetes REST APIs¶
Access the Kubernetes REST API with the URL prefix of https://<oam-floating-ip-address>:6443
and using the API syntax described at The Kubernetes API.
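For example, a minimal sketch using curl with a token obtained as described earlier in this document and the system-local-ca certificate saved as stx-ca.crt (the token variable and address are placeholders):
$ TOKEN=<ID_token_string>
$ curl --cacert stx-ca.crt -H "Authorization: Bearer ${TOKEN}" \
    https://<oam-floating-ip-address>:6443/api/v1/namespaces/default/pods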