Examples of User Management Common Tasks
This section provides a set of common tasks for managing both system administrator and general end user accounts, allowing you to set up unique users for your system.
Configure OIDC/LDAP Authentication for Kubernetes User Authentication
After installing StarlingX, you should configure OIDC/LDAP authentication for Kubernetes user authentication.
OIDC/LDAP authentication can be supported by StarlingX’s local LDAP server and/or up to three remote LDAP servers (for example, Windows Active Directory).
In this example, OIDC/LDAP authentication is set up for the local LDAP server.
Prerequisites
You must have the credentials for the ‘sysadmin’ local Linux user account used for installation.
Procedure
Log in to the active controller as the ‘sysadmin’ user.
Use either a local console or SSH.
Set up ‘sysadmin’ credentials.
$ source /etc/platform/openrc
Configure Kubernetes for OIDC token validation.
Use the default nodePort, 30556, for the oidc-auth-apps system application.

$ OAMIP=$(system oam-show | egrep "(oam_ip|oam_floating_ip)" | awk '{print $4}')
$ system service-parameter-add kubernetes kube_apiserver oidc-client-id=stx-oidc-client-app
$ system service-parameter-add kubernetes kube_apiserver oidc-groups-claim=groups
$ system service-parameter-add kubernetes kube_apiserver oidc-issuer-url=https://${OAMIP}:30556/dex
$ system service-parameter-add kubernetes kube_apiserver oidc-username-claim=email
$ system service-parameter-apply kubernetes

Configure and apply the oidc-auth-apps system application.

Create the certificate to be used by both the OIDC client and the OIDC identity provider.
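The `$OAMIP` extraction used above depends on the column layout of the `system oam-show` table: once the `|` separators are counted, the value sits in the 4th whitespace-separated field. A quick way to sanity-check the pipe is to run it against sample output; the table and IP below are illustrative only, not taken from a real system:

```shell
# Illustrative stand-in for `system oam-show` output (values are made up)
sample_oam_show() {
  cat <<'EOF'
+-----------------+--------------+
| Property        | Value        |
+-----------------+--------------+
| oam_floating_ip | 10.20.1.2    |
| oam_subnet      | 10.20.1.0/24 |
+-----------------+--------------+
EOF
}

# Same pipe as the procedure: '|' is field 1, the property name is field 2,
# the next '|' is field 3, and the value is field 4.
OAMIP=$(sample_oam_show | egrep "(oam_ip|oam_floating_ip)" | awk '{print $4}')
echo "$OAMIP"
```

The egrep alternation matches whichever property name the deployment reports (oam_ip or oam_floating_ip).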
$ mkdir /home/sysadmin/oidc
$ OAMIP=$(system oam-show | egrep "(oam_ip|oam_floating_ip)" | awk '{print $4}')
$ cat <<EOF > /home/sysadmin/oidc/oidc-auth-apps-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: oidc-auth-apps-certificate
  namespace: kube-system
spec:
  secretName: oidc-auth-apps-certificate
  duration: 2160h    # 90 days
  renewBefore: 360h  # 15 days
  issuerRef:
    name: system-local-ca
    kind: ClusterIssuer
  commonName: ${OAMIP}
  subject:
    organizations:
      - ABC-Company
    organizationalUnits:
      - StarlingX-system-oidc-auth-apps
  ipAddresses:
    - ${OAMIP}
EOF
$ kubectl apply -f /home/sysadmin/oidc/oidc-auth-apps-certificate.yaml

Configure the OIDC client with the OIDC client certificate and OIDC identity server certificate (created in the "Create the certificate to be used by both the OIDC client and the OIDC identity provider" step) and the trusted CA that you used to sign these certificates (i.e., system-local-ca).

$ kubectl get secret system-local-ca -n cert-manager -o=jsonpath='{.data.ca\.crt}' | base64 --decode > /home/sysadmin/oidc/dex-ca-cert.crt
$ kubectl create secret generic dex-ca-cert --from-file=/home/sysadmin/oidc/dex-ca-cert.crt -n kube-system
$ cat <<EOF > /home/sysadmin/oidc/oidc-client-overrides.yaml
tlsName: oidc-auth-apps-certificate
config:
  # The OIDC client container mounts the dex-ca-cert secret at /home, therefore
  # issuer_root_ca: /home/<filename-only-of-generic-secret>
  issuer_root_ca: /home/dex-ca-cert.crt
  issuer_root_ca_secret: dex-ca-cert
  # secret for accessing dex
  client_secret: stx-oidc-client-p@ssw0rd
EOF
$ system helm-override-update oidc-auth-apps oidc-client kube-system --values /home/sysadmin/oidc/oidc-client-overrides.yaml

Configure the secret observer to track renewals of certificates.
$ cat <<EOF > /home/sysadmin/oidc/secret-observer-overrides.yaml
cronSchedule: "*/15 * * * *"
observedSecrets:
  - secretName: "dex-ca-cert"
    filename: "dex-ca-cert.crt"
    deploymentToRestart: "stx-oidc-client"
  - secretName: "oidc-auth-apps-certificate"
    filename: "tls.crt"
    deploymentToRestart: "stx-oidc-client"
  - secretName: "oidc-auth-apps-certificate"
    filename: "tls.crt"
    deploymentToRestart: "oidc-dex"
EOF
$ system helm-override-update oidc-auth-apps secret-observer kube-system --values /home/sysadmin/oidc/secret-observer-overrides.yaml
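The overrides files in this procedure are written with unquoted `cat <<EOF` heredocs, so references like `${OAMIP}` are expanded by the shell when the file is generated, not left for a later consumer. A minimal sketch of that behavior (the value is illustrative):

```shell
OAMIP="10.20.1.2"
tmp=$(mktemp)

# Unquoted delimiter: ${OAMIP} is substituted as the file is written.
cat <<EOF > "$tmp"
commonName: ${OAMIP}
EOF

# Quoting the delimiter (<<'EOF') would instead keep the text literal.
cat <<'EOF' >> "$tmp"
literal: ${OAMIP}
EOF

cat "$tmp"
```

This is why variables such as OAMIP, MGMTIP, and BINDPW must be set in the shell before the heredoc runs.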
Create a secret with the certificate of the CA that signed the local LDAP certificate, i.e., system-local-ca, to be used in the "Specify the configuration for connecting to local LDAP in the user overrides for the oidc-auth-apps application" step.

$ kubectl get secret system-local-ca -n cert-manager -o=jsonpath='{.data.ca\.crt}' | base64 --decode > /home/sysadmin/oidc/local-ldap-ca-cert.crt
$ kubectl create secret generic local-ldap-ca-cert --from-file=/home/sysadmin/oidc/local-ldap-ca-cert.crt -n kube-system

Specify the configuration for connecting to local LDAP in the user overrides for the oidc-auth-apps application.

$ OAMIP=$(system oam-show | egrep "(oam_ip|oam_floating_ip)" | awk '{print $4}')
$ MGMTIP=$(system addrpool-list --nowrap | fgrep management | awk '{print $14}')
$ BINDPW=$(keyring get ldap ldapadmin)
$ cat <<EOF > /home/sysadmin/oidc/dex-overrides.yaml
config:
  staticClients:
    - id: stx-oidc-client-app
      name: STX OIDC Client app
      redirectURIs: ['https://${OAMIP}:30555/callback']
      secret: stx-oidc-client-p@ssw0rd
  expiry:
    idTokens: "10h"
  connectors:
    - type: ldap
      name: LocalLDAP
      id: localldap-1
      config:
        host: ${MGMTIP}:636
        rootCA: /etc/ssl/certs/adcert/local-ldap-ca-cert.crt
        insecureNoSSL: false
        insecureSkipVerify: false
        bindDN: CN=ldapadmin,DC=cgcs,DC=local
        bindPW: ${BINDPW}
        usernamePrompt: Username
        userSearch:
          baseDN: ou=People,dc=cgcs,dc=local
          filter: "(objectClass=posixAccount)"
          username: uid
          idAttr: DN
          emailAttr: uid
          nameAttr: gecos
        groupSearch:
          baseDN: ou=Group,dc=cgcs,dc=local
          filter: "(objectClass=posixGroup)"
          userMatchers:
            - userAttr: uid
              groupAttr: memberUid
          nameAttr: cn
volumeMounts:
  - mountPath: /etc/ssl/certs/adcert
    name: certdir
  - mountPath: /etc/dex/tls
    name: https-tls
volumes:
  - name: certdir
    secret:
      secretName: local-ldap-ca-cert
  - name: https-tls
    secret:
      defaultMode: 420
      secretName: oidc-auth-apps-certificate
EOF
$ system helm-override-update oidc-auth-apps dex kube-system --values /home/sysadmin/oidc/dex-overrides.yaml

Apply the oidc-auth-apps system application.

$ system application-apply oidc-auth-apps

Wait for the oidc-auth-apps system application to reach the ‘applied’ status.

$ system application-list
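`system application-apply` returns before the application finishes applying, so scripts typically poll `system application-list` until the status reads ‘applied’. A hypothetical polling helper is sketched below; it is demonstrated against a mock status command, since it cannot assume a live StarlingX system:

```shell
# Retry a command until its output matches a pattern, up to N tries.
wait_for() {
  local pattern=$1 tries=$2; shift 2
  local i
  for i in $(seq "$tries"); do
    "$@" | grep -q "$pattern" && return 0
    sleep 1
  done
  return 1
}

# Mock that reports 'applying' twice, then 'applied'. On a real system this
# would be: system application-list | grep oidc-auth-apps
state=$(mktemp); echo 0 > "$state"
mock_status() {
  local n; n=$(cat "$state")
  echo $((n + 1)) > "$state"
  if [ "$n" -ge 2 ]; then echo "oidc-auth-apps applied"; else echo "oidc-auth-apps applying"; fi
}

wait_for "applied" 5 mock_status && echo "reached applied status"
```

The helper name, retry count, and mock are assumptions for illustration; adjust the sleep interval for real apply times, which can be minutes.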
Create First System Administrator
After installing StarlingX, you should create your first unique system administrator account.
In this example, a local LDAP user account and a local Keystone user account are created for the first system administrator user.
The first system administrator user must have full ‘admin’ privileges such that it can create subsequent system administrators and end users.
Prerequisites
You must have the credentials for the ‘sysadmin’ local Linux user account used for the installation.
Procedure
Log in to the active controller as the ‘sysadmin’ user.
Use either a local console or SSH.
Source the credentials for the ‘admin’ Keystone user.
$ source /etc/platform/openrc
Create a directory for temporary files for setting up users and groups.
$ mkdir /home/sysadmin/users
Create a new local LDAP group for system administrators with full privileges.
$ sudo ldapaddgroup Level1SystemAdmin
Add full Linux authorization privileges to the Level1SystemAdmin LDAP group members.

Enable pam_group.so in /etc/pam.d/common-auth, and update /etc/security/group.conf with LDAP group mappings.

Note
For an AIO-DX controller configuration, add full Linux authorization privileges on both controllers.
# Execute this line only once, on each host
$ sudo sed -i '1i auth required pam_group.so use_first_pass' /etc/pam.d/common-auth

# Execute this line for each LDAP group being mapped to 1 or more local Linux groups, on each host
$ sudo sed -i '$ a\*;*;%Level1SystemAdmin;Al0000-2400;sys_protected,root,sudo' /etc/security/group.conf
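Each mapping line appended to /etc/security/group.conf has five semicolon-separated fields (services; ttys; users; times; groups), as used by pam_group. Splitting a copy of the line added above makes the fields explicit; the comments describe the semantics this document relies on:

```shell
line='*;*;%Level1SystemAdmin;Al0000-2400;sys_protected,root,sudo'
IFS=';' read -r services ttys users times groups <<< "$line"

echo "services: $services"  # '*' = any PAM service
echo "ttys:     $ttys"      # '*' = any terminal
echo "users:    $users"     # '%' prefix = match by group membership (the LDAP group)
echo "times:    $times"     # 'Al' = all days, 0000-2400 = all hours
echo "groups:   $groups"    # local Linux groups granted to matching logins
```

Editing the real file still requires sudo, as shown in the procedure; this snippet only parses a copy of the line.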
Add full Kubernetes authorization privileges to the Level1SystemAdmin LDAP group members.

Add a Kubernetes ClusterRoleBinding to bind the Level1SystemAdmin group to the cluster-admin role.

$ cat << EOF > /home/sysadmin/users/Level1SystemAdmin-clusterrolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: Level1SystemAdmin
subjects:
- kind: Group
  name: Level1SystemAdmin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f /home/sysadmin/users/Level1SystemAdmin-clusterrolebinding.yml
Create a new local LDAP user for the first system administrator.
$ sudo ldapusersetup -u joefulladmin
Password:
Successfully added user joefulladmin to LDAP
Successfully set password for user joefulladmin
Warning : password is reset, user will be asked to change password at login
Successfully modified user entry uid=joefulladmin,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 90 days
Successfully modified user entry uid=joefulladmin,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 2 days
Add the first system administrator to the Level1SystemAdmin group.

$ sudo ldapaddusertogroup joefulladmin Level1SystemAdmin
Add a new local Keystone user for the first system administrator, using the same username.

Create a Keystone user in the ‘admin’ project. The StarlingX infrastructure resources are all contained in the ‘admin’ project.

$ USERNAME="joefulladmin"
$ USERPASSWORD="<password>"
$ PROJECTNAME="admin"
$ PROJECTID=$(openstack project list | grep "${PROJECTNAME}" | awk '{print $2}')
$ openstack user create --password "${USERPASSWORD}" --project ${PROJECTID} "${USERNAME}"
$ openstack role add --project ${PROJECTNAME} --user ${USERNAME} _member_

Add full StarlingX authorization privileges to the first system administrator’s Keystone user account.

$ openstack role add --project ${PROJECTNAME} --user ${USERNAME} admin
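As with the OAM IP earlier, the `${PROJECTID}` lookup scrapes a table: grep picks the row whose Name column contains "admin", and awk takes the second whitespace-separated field (the ID, after the leading `|`). The pipe can be tested against illustrative output; the IDs below are made up:

```shell
# Illustrative stand-in for `openstack project list` output
sample_project_list() {
  cat <<'EOF'
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 1234567890abcdef1234567890abcdef | admin    |
| fedcba0987654321fedcba0987654321 | services |
+----------------------------------+----------+
EOF
}

PROJECTNAME="admin"
# Caveat: a plain grep also matches project names that merely contain
# "admin" (e.g. "admin2"); anchoring on " admin " would be stricter.
PROJECTID=$(sample_project_list | grep "${PROJECTNAME}" | awk '{print $2}')
echo "$PROJECTID"
```

On a real system, replace sample_project_list with `openstack project list` as in the procedure.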
Logout as ‘sysadmin’.
$ exit
Postrequisites
Log in to the local console or SSH with this new first system administrator, joefulladmin. See System Administrator - Test Local Access using SSH/Linux Shell and System and Kubernetes CLI.

Continue to Create Other System Administrators.
System Administrator - Test Local Access using SSH/Linux Shell and System and Kubernetes CLI
After creating your first system administrator with full privileges, test access to Linux, StarlingX, and Kubernetes commands and resources.
Prerequisites
You must have created your first system administrator.
You need to perform this procedure using the first system administrator.
Procedure
Log in to the active controller as the first system administrator, joefulladmin in these examples.

Use either a local console or SSH.
Note
If this is the first time logging in with your Local LDAP account, the password configured is your username. You will be forced to update your password.
Test access to Linux commands (admin and non-admin).

# Creating a user requires sudo
$ sudo ldapusersetup -u johnsmith
Successfully added user johnsmith to LDAP
Successfully set password for user johnsmith
Warning : password is reset, user will be asked to change password at login
Successfully modified user entry uid=johnsmith,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 90 days
Successfully modified user entry uid=johnsmith,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 2 days

# Listing IP interfaces does not require admin privileges
$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:39:06:4e brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 08:00:27:38:8b:7c brd ff:ff:ff:ff:ff:ff
...
Test access to Kubernetes commands / resources.

Use kubeconfig-setup to set up KUBECONFIG for the local environment.

$ kubeconfig-setup
$ source ~/.profile

Use oidc-auth to authenticate via OIDC/LDAP.

$ oidc-auth
Using "joefulladmin" as username.
Password:
Successful authentication.
Updated /home/joefulladmin/.kube/config

Use kubectl to test access to Kubernetes commands / resources (admin and non-admin).

# Displaying anything in the 'kube-system' namespace requires 'cluster-admin' privileges
$ kubectl -n kube-system get secrets
NAME                         TYPE                   DATA   AGE
ceph-admin                   Opaque                 2      3d8h
ceph-pool-kube-cephfs-data   kubernetes.io/cephfs   4      3d8h
ceph-pool-kube-rbd           kubernetes.io/rbd      2      3d8h

# Anyone can display resources in the 'default' namespace
$ kubectl -n default get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d9h
Test access to StarlingX commands / resources.

Use local_starlingxrc to set up StarlingX environment variables and your Keystone user’s authentication credentials.

$ source local_starlingxrc
Enter the password to be used with Keystone user joefulladmin:
Created file /home/joefulladmin/joefulladmin-openrc

Test Keystone commands (admin and non-admin).

# Making changes to the system requires the 'admin' role
$ system modify -l Ottawa
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| contact              | None                                 |
| created_at           | 2024-07-12T10:52:40.609006+00:00     |
| description          | None                                 |
| https_enabled        | True                                 |
| latitude             | None                                 |
| location             | Ottawa                               |
| longitude            | None                                 |
...

# Any member of the 'admin' project can display system parameters
$ system host-if-list controller-0
+--------------------------------------+--------+----------+----------+---------+------------+----------+-------------+------------+
| uuid                                 | name   | class    | type     | vlan id | ports      | uses i/f | used by i/f | attributes |
+--------------------------------------+--------+----------+----------+---------+------------+----------+-------------+------------+
| 287eca5a-8721-4422-b73a-bf24805eac4c | enp0s3 | platform | ethernet | None    | ['enp0s3'] | []       | []          | MTU=1500   |
| 325c32b9-fe40-4900-a0ff-59062190ce80 | lo     | platform | virtual  | None    | []         | []       | []          | MTU=1500   |
+--------------------------------------+--------+----------+----------+---------+------------+----------+-------------+------------+
Postrequisites
Continue to Create Other System Administrators.
Create Other System Administrators
After setting up your first system administrator, use this first system administrator to configure other system administrators.
In the following example, creating other system administrators consists of:
Create system administrator groups with different privilege levels.
The Level1SystemAdmin group with full privileges (including sudo) has already been created, when creating the first system administrator.

Create a Level2SystemAdmin group with full privileges, but with no Linux sudo capability.

Create a Level3SystemAdmin group with read-only privileges.
Create one or more new system administrator users in each of the above groups.
For each user, create both:
a local LDAP user account.
a keystone user account.
Prerequisites
You need to use the first system administrator created to execute this procedure.
Procedure
Log in to the active controller as the first system administrator, joefulladmin in this example.

Use either a local console or SSH.

Use local_starlingxrc to set up StarlingX environment variables and the Keystone user’s authentication credentials.

$ source local_starlingxrc
Enter the password to be used with keystone user joefulladmin:
Created file /home/joefulladmin/joefulladmin-openrc

Use oidc-auth to authenticate via OIDC/LDAP for the Kubernetes CLI.

$ oidc-auth
Using "joefulladmin" as username.
Password:
Successful authentication.
Updated /home/joefulladmin/.kube/config
Set up additional system admin groups with different privileges.
Create a directory for temporary files for setting up users and groups.
$ mkdir /home/joefulladmin/users
Create a new local LDAP group with full privileges (but without Linux sudo capability) for system administrators.

$ sudo ldapaddgroup Level2SystemAdmin
Add full Linux authorization privileges (but without Linux ‘sudo’ capability) to the Level2SystemAdmin LDAP group members.

Update /etc/security/group.conf with LDAP group mappings.

Note
For an AIO-DX controller configuration, this step must be done on both controllers.

$ sudo sed -i '$ a\*;*;%Level2SystemAdmin;Al0000-2400;sys_protected,root' /etc/security/group.conf
Add restricted Kubernetes authorization privileges to the Level2SystemAdmin LDAP group members.

Add a Kubernetes ClusterRole and a Kubernetes ClusterRoleBinding to bind the Level2SystemAdmin group to a more restricted set of Kubernetes capabilities.

$ cat << EOF > /home/joefulladmin/users/Level2SystemAdmin-clusterrolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: Level2SystemAdmin
rules:
# For the core API group (""), allow full access to all resource types
# EXCEPT for serviceaccounts and resource policies (limitranges and resourcequotas),
# for which only read access is allowed
- apiGroups: [""]
  resources: ["bindings", "configmaps", "endpoints", "events", "persistentvolumeclaims", "pods", "podtemplates", "replicationcontrollers", "secrets", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["serviceaccounts", "limitranges", "resourcequotas"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: Level2SystemAdmin
subjects:
- kind: Group
  name: Level2SystemAdmin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: Level2SystemAdmin
  apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f /home/joefulladmin/users/Level2SystemAdmin-clusterrolebinding.yml

‘admin’ StarlingX authorization privileges will be given to the Level2SystemAdmin LDAP group members when they are created in a subsequent step.
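The effect of the two rules above can be summarized as: full CRUD on most core API group resources, read-only on serviceaccounts, limitranges, and resourcequotas, and no access to anything else in the core group. A hypothetical helper function mirroring that split (illustration only, not part of StarlingX):

```shell
# Return the verbs the Level2SystemAdmin ClusterRole grants for a given
# core API group resource, per the two rules defined above.
level2_verbs() {
  case "$1" in
    serviceaccounts|limitranges|resourcequotas)
      echo "get list" ;;
    bindings|configmaps|endpoints|events|persistentvolumeclaims|pods|podtemplates|replicationcontrollers|secrets|services)
      echo "get list watch create update patch delete" ;;
    *)
      echo "none" ;;
  esac
}

echo "pods:            $(level2_verbs pods)"
echo "serviceaccounts: $(level2_verbs serviceaccounts)"
echo "nodes:           $(level2_verbs nodes)"
```

On a live cluster, the authoritative check is `kubectl auth can-i <verb> <resource> --as=<user> --as-group=Level2SystemAdmin`.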
Create a new local LDAP group for read-only system administrators.
$ sudo ldapaddgroup Level3SystemAdmin
Do not add additional Linux authorization privileges to the Level3SystemAdmin LDAP group members.

Update /etc/security/group.conf with LDAP group mappings.

Note
For an AIO-DX controller configuration, this step must be done on both controllers.

$ sudo sed -i '$ a\*;*;%Level3SystemAdmin;Al0000-2400;users' /etc/security/group.conf
Add ‘reader’ Kubernetes authorization privileges to the Level3SystemAdmin LDAP group members.

Add a Kubernetes ClusterRole and a Kubernetes ClusterRoleBinding to bind the Level3SystemAdmin group to a read-only set of Kubernetes capabilities.

$ cat << EOF > /home/joefulladmin/users/Level3SystemAdmin-clusterrolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: Level3SystemAdmin
rules:
- apiGroups: [""]  # "" indicates the core API group
  resources: ["*"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: Level3SystemAdmin
subjects:
- kind: Group
  name: Level3SystemAdmin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: Level3SystemAdmin
  apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f /home/joefulladmin/users/Level3SystemAdmin-clusterrolebinding.yml

The ‘reader’ StarlingX authorization privileges will be given to the Level3SystemAdmin LDAP group members when they are created in a subsequent step.
Create system ‘admin’ users in each of the 3 system admin groups.
Create one or more users in the Level1SystemAdmin group and give each a Keystone user account with an ‘admin’ role.

$ sudo ldapusersetup -u davefulladmin
Password:
Successfully added user davefulladmin to LDAP
Successfully set password for user davefulladmin
Warning : password is reset, user will be asked to change password at login
Successfully modified user entry uid=davefulladmin,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 90 days
Successfully modified user entry uid=davefulladmin,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 2 days
$ sudo ldapaddusertogroup davefulladmin Level1SystemAdmin
$ USERNAME="davefulladmin"
$ USERPASSWORD="<password>"
$ PROJECTNAME="admin"
$ PROJECTID=`openstack project list | grep ${PROJECTNAME} | awk '{print $2}'`
$ openstack user create --password "${USERPASSWORD}" --project ${PROJECTID} "${USERNAME}"
$ openstack role add --project ${PROJECTNAME} --user ${USERNAME} _member_
$ openstack role add --project ${PROJECTNAME} --user ${USERNAME} admin

Repeat this step for additional users required in the Level1SystemAdmin group.

Create one or more users in the Level2SystemAdmin group and give each a Keystone user account with an ‘admin’ role.

$ sudo ldapusersetup -u jimbasicadmin
Password:
Successfully added user jimbasicadmin to LDAP
Successfully set password for user jimbasicadmin
Warning : password is reset, user will be asked to change password at login
Successfully modified user entry uid=jimbasicadmin,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 90 days
Successfully modified user entry uid=jimbasicadmin,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 2 days
$ sudo ldapaddusertogroup jimbasicadmin Level2SystemAdmin
$ USERNAME="jimbasicadmin"
$ USERPASSWORD="<password>"
$ PROJECTNAME="admin"
$ PROJECTID=`openstack project list | grep ${PROJECTNAME} | awk '{print $2}'`
$ openstack user create --password "${USERPASSWORD}" --project ${PROJECTID} "${USERNAME}"
$ openstack role add --project ${PROJECTNAME} --user ${USERNAME} _member_
$ openstack role add --project ${PROJECTNAME} --user ${USERNAME} admin

Repeat this step for additional users required in the Level2SystemAdmin group.

Create one or more users in the Level3SystemAdmin group and give each a Keystone user account with a ‘reader’ role.

$ sudo ldapusersetup -u billreaderadmin
Password:
Successfully added user billreaderadmin to LDAP
Successfully set password for user billreaderadmin
Warning : password is reset, user will be asked to change password at login
Successfully modified user entry uid=billreaderadmin,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 90 days
Successfully modified user entry uid=billreaderadmin,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 2 days
$ sudo ldapaddusertogroup billreaderadmin Level3SystemAdmin
$ USERNAME="billreaderadmin"
$ USERPASSWORD="<password>"
$ PROJECTNAME="admin"
$ PROJECTID=`openstack project list | grep ${PROJECTNAME} | awk '{print $2}'`
$ openstack user create --password "${USERPASSWORD}" --project ${PROJECTID} "${USERNAME}"
$ openstack role add --project ${PROJECTNAME} --user ${USERNAME} _member_
$ openstack role add --project ${PROJECTNAME} --user ${USERNAME} reader

Repeat this step for additional users required in the Level3SystemAdmin group.
Postrequisites
Each system administrator user created should now be able to:

SSH to the system;
execute Linux commands based on their Linux authorizations;
execute StarlingX CLI commands based on their StarlingX authorizations;
execute Kubernetes CLI commands based on their Kubernetes RBAC role bindings.

See section: System Administrator - Test Local Access using SSH/Linux Shell and System and Kubernetes CLI.
Create End Users
After setting up your system administrators, use a Level1SystemAdmin system administrator to configure ‘end users’.
In the following example, creating end users consists of:
Create a general end user Kubernetes ClusterRole with restricted Kubernetes capabilities.

For one or more specific end user groups:
Create an LDAP group;
Optionally, disable SSH access for this LDAP group (i.e., restricting these end users to only use remote CLIs / APIs / GUIs);
Create a Kubernetes namespace for the group;
Bind the general end user Kubernetes ClusterRole to the LDAP group for this Kubernetes namespace;
Create one or more LDAP users in this LDAP group.
Prerequisites
You should already have created a system administrator.
You need to perform this procedure using the Level1SystemAdmin system administrator.
Procedure
Log in to the active controller as a Level1SystemAdmin system administrator, joefulladmin in this example.

Use either a local console or SSH.

Use local_starlingxrc to set up StarlingX environment variables and your Keystone user’s authentication credentials.

$ source local_starlingxrc
Enter the password to be used with keystone user joefulladmin:
Created file /home/joefulladmin/joefulladmin-openrc

Use oidc-auth to authenticate via OIDC/LDAP for the Kubernetes CLI.

$ oidc-auth
Using "joefulladmin" as username.
Password:
Successful authentication.
Updated /home/joefulladmin/.kube/config
Create a directory for temporary files for setting up users and groups.
$ mkdir /home/joefulladmin/users
Create a general end user Kubernetes ClusterRole with restricted Kubernetes authorization privileges.

$ cat << EOF > /home/joefulladmin/users/GeneralEndUser-ClusterRole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: GeneralEndUser
rules:
# For the core API group (""), allow full access to all resource types
# EXCEPT for resource policies (limitranges and resourcequotas),
# for which only read access is allowed
- apiGroups: [""]
  resources: ["bindings", "configmaps", "endpoints", "events", "persistentvolumeclaims", "pods", "podtemplates", "replicationcontrollers", "secrets", "serviceaccounts", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["limitranges", "resourcequotas"]
  verbs: ["get", "list"]
# Allow full access to all resource types of the following explicit list of apiGroups.
# Notable exceptions here are:
#     ApiGroup                        ResourceTypes
#     --------                        -------------
#     policy                          podsecuritypolicies, poddisruptionbudgets
#     networking.k8s.io               networkpolicies
#     admissionregistration.k8s.io    mutatingwebhookconfigurations, validatingwebhookconfigurations
- apiGroups: ["apps", "batch", "extensions", "autoscaling", "apiextensions.k8s.io", "rbac.authorization.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Cert Manager API access
- apiGroups: ["cert-manager.io", "acme.cert-manager.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF
$ kubectl apply -f /home/joefulladmin/users/GeneralEndUser-ClusterRole.yml

For one or more specific end user groups, create an LDAP group, a Kubernetes namespace, and one or more LDAP users.
Create a new local LDAP group for the end user group.
$ sudo ldapaddgroup ABC-EndUsers
Optional step: Disallow this local LDAP group from using SSH.
Users of this LDAP group can then only use the remote Kubernetes API/CLI/GUI. Update /etc/security/group.conf with LDAP group mappings.

Note
For an AIO-DX controller configuration, disallow this local LDAP group on both controllers.

$ sudo sed -i '$ a\*;*;%ABC-EndUsers;Al0000-2400;denyssh' /etc/security/group.conf
Create a kubernetes namespace for the end user group.
$ kubectl create namespace abc-ns
Bind the GeneralEndUser ClusterRole to this LDAP group for this Kubernetes namespace.

$ cat << EOF > /home/joefulladmin/users/ABC-EndUsers-rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ABC-EndUsers
  namespace: abc-ns
subjects:
- kind: Group
  name: ABC-EndUsers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: GeneralEndUser
  apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f /home/joefulladmin/users/ABC-EndUsers-rolebinding.yml
Create one or more LDAP users for the end user group.
$ sudo ldapusersetup -u steveenduser
Password:
Successfully added user steveenduser to LDAP
Successfully set password for user steveenduser
Warning : password is reset, user will be asked to change password at login
Successfully modified user entry uid=steveenduser,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 90 days
Successfully modified user entry uid=steveenduser,ou=People,dc=cgcs,dc=local in LDAP
Updating password expiry to 2 days
$ sudo ldapaddusertogroup steveenduser ABC-EndUsers
Repeat the "Create one or more LDAP users for the end user group" step for the next user in this end user group.

Repeat the "For one or more specific end user groups, create an LDAP group, a Kubernetes namespace, and one or more LDAP users" step for the next end user group.
Postrequisites
Each end user created is now able to (optionally) SSH to the system, execute Linux commands, and execute Kubernetes CLI commands to manage the hosted containerized application. See section: End Users - Test Local Access using SSH or Kubernetes CLI.
Note
More setup is required for end users to use remote CLIs/GUIs; see section Remote Access.
End Users - Test Local Access using SSH or Kubernetes CLI
After creating end users, test their access to Kubernetes commands / resources and Linux commands.
Prerequisites
You should already have created at least one end user.
You need to perform this procedure using an end user.
Procedure
Log in to the active controller as an end user, steveenduser in this example.

Use either a local console or SSH.
Test access to Linux commands (admin and non-admin) using the following commands:

# Creating a user requires sudo
$ sudo ldapusersetup -u johnsmith
steveenduser is not allowed to run sudo on controller-0. This incident will be reported.

# Listing IP interfaces does not require admin privileges
$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:39:06:4e brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 08:00:27:38:8b:7c brd ff:ff:ff:ff:ff:ff
...
Test access to Kubernetes commands / resources using the following steps:

Use kubeconfig-setup to set up KUBECONFIG for the local environment.

$ kubeconfig-setup
$ source ~/.profile

Use oidc-auth to authenticate through OIDC/LDAP.

$ oidc-auth
Using "steveenduser" as username.
Password:
Successful authentication.
Updated /home/steveenduser/.kube/config

Use kubectl to test access to Kubernetes commands / resources (admin and non-admin).

# Displaying anything in the 'kube-system' namespace requires 'cluster-admin' privileges
$ kubectl -n kube-system get secrets
Error from server (Forbidden): secrets is forbidden: User "steveenduser" cannot list resource "secrets" in API group "" in the namespace "kube-system"

# Should be able to display resources in his own namespace, 'abc-ns'
$ kubectl -n abc-ns get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d9h
Test access to StarlingX commands / resources.

$ source local_starlingxrc
Enter the password to be used with Keystone user steveenduser:
Created file /home/steveenduser/steveenduser-openrc
$ system host-list
Must provide Keystone credentials or user-defined endpoint and token, error was: The request you have made requires authentication. (HTTP 401) (Request-ID: req-0feb292f-d649-4d9f-8e60-f63643265207)
Postrequisites
Set up remote access for any end users requiring remote access. See Remote Access.
Remote Access
This section provides a procedure for a system administrator to collect the system and user information required for a user to connect remotely to StarlingX. It also provides procedures for system administrators and end users to remotely connect to StarlingX CLIs, Kubernetes CLIs, and GUIs.
System Administrator - Collect System Information for Remote User Access
This procedure collects a variety of data required for a user to remotely interface with a StarlingX system.
The following data needs to be collected:
The public certificate of the Root CA that signed the certificates of the StarlingX system.
The remote user needs to update their remote system to trust this certificate.
Kubernetes environment data for the StarlingX system.
StarlingX environment data for the StarlingX system.
Procedure
Login to the active controller as a
Level1SystemAdminsystem administrator,joefulladminin this example.Use either a local console or SSH.
Use
local_starlingxrcto setup StarlingX environment variables and to setup your keystone user’s authentication credentials.$ source local_starlingxrc Enter the password to be used with Keystone user joefulladmin: Created file /home/joefulladmin/joefulladmin-openrc
Use kubeconfig-setup to set up KUBECONFIG for the local environment, and use oidc-auth to set up OIDC/LDAP authentication credentials.

$ kubeconfig-setup
$ source ~/.profile
$ oidc-auth
Using "joefulladmin" as username.
Password:
Successful authentication.
Updated /home/joefulladmin/.kube/config.
Create a directory for storing information for remote users.
$ mkdir ~/stx-remote-access-info
Get the public certificate of the Root CA that signed the certificates of the StarlingX system.

$ kubectl get secret system-local-ca -n cert-manager -o=jsonpath='{.data.ca\.crt}' | base64 --decode > ~/stx-remote-access-info/stx.ca.crt

Get the Kubernetes environment data for the StarlingX system.
$ OAMIP=$(system oam-show | egrep "(oam_ip|oam_floating_ip)" | awk '{print $4}')
$ touch ~/stx-remote-access-info/kubeconfig
$ kubectl config --kubeconfig ~/stx-remote-access-info/kubeconfig set-cluster stx-cluster --server=https://${OAMIP}:6443 --embed-certs --certificate-authority ~/stx-remote-access-info/stx.ca.crt
$ kubectl config --kubeconfig ~/stx-remote-access-info/kubeconfig set-context YOURUSERNAMEHERE@stx-cluster --cluster=stx-cluster --user YOURUSERNAMEHERE
$ kubectl config --kubeconfig ~/stx-remote-access-info/kubeconfig use-context YOURUSERNAMEHERE@stx-cluster

Get the StarlingX environment data for the StarlingX system.
$ OAMIP=$(system oam-show | egrep "(oam_ip|oam_floating_ip)" | awk '{print $4}')
$ PROJECTNAME="admin"
$ PROJECTID=`openstack project list | grep ${PROJECTNAME} | awk '{print $2}'`
$ cat <<EOF > ~/stx-remote-access-info/starlingxrc
#!/usr/bin/env bash
#
export OS_AUTH_URL=https://${OAMIP}:5000/v3
export OS_PROJECT_ID=${PROJECTID}
export OS_PROJECT_NAME=${PROJECTNAME}
export OS_USER_DOMAIN_NAME="Default"
export OS_PROJECT_DOMAIN_ID="default"
export OS_PROJECT_DOMAIN_NAME=""
export OS_USERNAME=YOURUSERNAMEHERE
echo "Please enter your OpenStack Password for project \$OS_PROJECT_NAME as user \$OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=\$OS_PASSWORD_INPUT
export OS_REGION_NAME=${OS_REGION_NAME}
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
export OS_CACERT=./stx-remote-access-info/stx.ca.crt
EOF

Package up the following files for a remote user to use when setting up remote access on their system.
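A subtlety in the heredoc above: unescaped references like ${OAMIP} and ${PROJECTID} are expanded while the rc file is being written, whereas escaped ones like \$OS_PASSWORD_INPUT are kept literal and only evaluated when the remote user later sources the file. A minimal standalone sketch of this pattern (hypothetical variable names, not part of the procedure):

```shell
#!/usr/bin/env bash
# Expanded-now vs. expanded-later in an unquoted heredoc.
GENERATED_AT="creation-time"            # known when the file is written

cat <<EOF > /tmp/demo-rc
export FIXED_VALUE=${GENERATED_AT}
export ASKED_VALUE=\${RUNTIME_INPUT}
EOF

# The written file contains the literal text '${RUNTIME_INPUT}':
grep 'RUNTIME_INPUT' /tmp/demo-rc

# Sourcing the file evaluates the deferred reference:
RUNTIME_INPUT="source-time"
source /tmp/demo-rc
echo "${FIXED_VALUE} ${ASKED_VALUE}"    # -> creation-time source-time
```

This is why the generated starlingxrc can bake in the OAM IP and project ID at creation time while still prompting each user for their own password at login time.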
$ cd ~
$ tar cvf stx-remote-access-info.tar ./stx-remote-access-info
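Before sending the archive, you can list its members without extracting to confirm everything a remote user needs is included. A quick check, sketched here with placeholder files standing in for the real kubeconfig, starlingxrc and stx.ca.crt:

```shell
# Placeholder directory; on the controller this is ~/stx-remote-access-info
# populated by the previous steps.
mkdir -p /tmp/stx-remote-access-info
touch /tmp/stx-remote-access-info/kubeconfig \
      /tmp/stx-remote-access-info/starlingxrc \
      /tmp/stx-remote-access-info/stx.ca.crt

tar -C /tmp -cf /tmp/stx-remote-access-info.tar ./stx-remote-access-info

# 'tar tf' lists archive members without extracting them
tar tf /tmp/stx-remote-access-info.tar
```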
Postrequisites
For any user requiring remote access:
Securely send them the stx-remote-access-info.tar file.

Have them follow the procedures for setting up remote access. See Remote Access.
System Administrator - Access Horizon GUI¶
Access the StarlingX Horizon GUI using your browser.
This procedure should be performed on a system that has a web browser.
Prerequisites
A system with a web browser.
You need to have the stx-remote-access-info.tar file from your system administrator, containing system information related to your StarlingX system.
Procedure
Update your web browser to ‘trust’ the StarlingX CA certificate.
Extract the files from the stx-remote-access-info.tar file from your StarlingX system administrator.
$ cd ~
$ tar xvf ./stx-remote-access-info.tar

# The StarlingX CA Certificate is here:
$ ls ./stx-remote-access-info/stx.ca.crt
Follow your web browser’s instructions to add ~/stx-remote-access-info/stx.ca.crt to the list of trusted CAs for your browser.
Open your web browser at address
https://<OAM-Floating-IP-Address>:8443

Login with your Keystone account's 'username' and 'password'.
System Administrator - Configure System Remote CLI & Kubernetes Remote CLI¶
Configure the StarlingX remote CLI and kubernetes remote CLI on your Linux-based system so that you can remotely access your StarlingX system through remote CLI commands.
This procedure should be performed on your Linux-based system.
Prerequisites
You need to have a Linux-based system with Python installed, and either Docker already installed or 'sudo' capability to install Docker.
You need to have the stx-remote-access-info.tar file from your system administrator, containing system information related to your StarlingX system.
Procedure
Install Docker on your Linux-based system. The following example is for Ubuntu.
# Add Docker's official GPG key:
$ sudo apt-get update
$ sudo apt-get install ca-certificates curl
$ sudo install -m 0755 -d /etc/apt/keyrings
$ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
$ sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
$ echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
    $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update

# Install Docker packages
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Verify that the Docker Engine installation is successful by running the hello-world image.
$ sudo docker run hello-world

# Manage Docker as a non-root user
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ sudo reboot
Download and extract the StarlingX remote CLI tar file from the StarlingX site.
$ cd ~
$ wget https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/latest_build/outputs/remote-cli/stx-remote-cli-1.0.tgz
$ tar xvf stx-remote-cli-1.0.tgz
Extract the StarlingX system information from the stx-remote-access-info.tar file from your StarlingX system administrator.
# Files from your StarlingX System Administrator
$ ls ~/stx-remote-access-info.tar

$ cd ~/remote_cli
$ tar xvf ~/stx-remote-access-info.tar
Update the starlingxrc file.

$ vi ~/remote_cli/stx-remote-access-info/starlingxrc
// and change YOURUSERNAMEHERE to your StarlingX LDAP username, everywhere in the file
Update the KUBECONFIG file.

$ vi ~/remote_cli/stx-remote-access-info/kubeconfig
// and change YOURUSERNAMEHERE to your StarlingX LDAP username, everywhere in the file
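Instead of editing both files by hand, a sed substitution can make the replacement in one pass. A sketch on a placeholder copy (substitute your actual LDAP username for 'janedoe'; on your system the targets are the starlingxrc and kubeconfig files under ~/remote_cli/stx-remote-access-info):

```shell
# Placeholder file standing in for the real starlingxrc
mkdir -p /tmp/stx-remote-access-info
printf 'export OS_USERNAME=YOURUSERNAMEHERE\n' > /tmp/stx-remote-access-info/starlingxrc

# Replace every occurrence of the placeholder with the real username, in place
sed -i 's/YOURUSERNAMEHERE/janedoe/g' /tmp/stx-remote-access-info/starlingxrc

cat /tmp/stx-remote-access-info/starlingxrc   # -> export OS_USERNAME=janedoe
```

The same one-liner works for the kubeconfig file; the `g` flag ensures every occurrence on a line is replaced, matching the "everywhere in the file" instruction.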
Configure the containerized remote CLI clients.
$ ./configure_client.sh -t platform \
    -r ${HOME}/remote_cli/stx-remote-access-info/starlingxrc \
    -k ${HOME}/remote_cli/stx-remote-access-info/kubeconfig \
    -w ${HOME}/remote_cli \
    -p docker.io/starlingx/stx-platformclients:stx.9.0-v1.5.13
Postrequisites
Access the StarlingX remote CLI and kubernetes remote CLI.
See System Administrator - Access System Remote CLI & Kubernetes Remote CLI.
System Administrator - Access System Remote CLI & Kubernetes Remote CLI¶
Access your StarlingX system through the StarlingX remote CLI and kubernetes remote CLI on your Linux-based system.
Prerequisites
You need to have a Linux-based system that has configured the StarlingX remote CLI and kubernetes remote CLI. See section: System Administrator - Configure System Remote CLI & Kubernetes Remote CLI.
Procedure
Source the remote client for the StarlingX platform.
$ cd ~/remote_cli
$ source ./remote_client_platform.sh
Test the StarlingX remote CLI commands.
$ cd ~/remote_cli
$ system host-list
$ fm alarm-list
Test kubernetes remote CLI commands.
$ cd ~/remote_cli
$ oidc-auth -u <LDAP-USERNAME> -p <LDAP-PASSWORD> -c <OAM-FLOATING-IP>
$ kubectl get all
End User - Configure Kubernetes Remote CLI¶
Configure the kubernetes remote CLI on your Linux-based system to enable access to the StarlingX system kubernetes remote CLI commands.
This procedure should be performed on your Linux-based system.
Prerequisites
You need to have a Linux-based system with Python installed.
You need to have the stx-remote-access-info.tar file from your system administrator, containing system information related to your StarlingX system.
Procedure
Install the kubectl client CLI on the host.

Follow the instructions in Install and Set Up kubectl on Linux <https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>.
The example below can be used for Ubuntu.
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
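The upstream kubectl install guide also recommends validating the downloaded binary against its published SHA-256 checksum before installing it. The verification mechanics are sketched below on a placeholder file (in the real procedure the digest file is downloaded from dl.k8s.io alongside the binary):

```shell
# Placeholder standing in for the downloaded kubectl binary
printf 'placeholder-binary-content\n' > /tmp/kubectl

# The real check fetches kubectl.sha256 from dl.k8s.io; here we compute the
# expected digest locally just to demonstrate the verification step
sha256sum /tmp/kubectl | awk '{print $1}' > /tmp/kubectl.sha256

# sha256sum --check expects "<digest>  <filename>" and prints "OK" on a match
echo "$(cat /tmp/kubectl.sha256)  /tmp/kubectl" | sha256sum --check
```

A mismatch makes `sha256sum --check` exit non-zero, so the check can gate the subsequent `sudo install` in a script.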
Download the oidc-auth shell script from the StarlingX site and install Python mechanize.

$ wget https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/latest_build/outputs/remote-cli/oidc-auth
$ chmod a+rx oidc-auth
$ sudo apt install python3-pip
$ sudo pip install mechanize
Extract the StarlingX system information from the stx-remote-access-info.tar file from your StarlingX system administrator.
# Files from your StarlingX System Administrator
$ ls ~/stx-remote-access-info.tar

$ tar xvf ~/stx-remote-access-info.tar
Update the KUBECONFIG file.

$ mkdir ~/.kube
$ cp ~/stx-remote-access-info/kubeconfig ~/.kube/config
$ vi ~/.kube/config
// and change YOURUSERNAMEHERE to your StarlingX LDAP username, everywhere in the file

# Add ~/stx-remote-access-info/stx.ca.crt to the list of trusted CAs
# e.g. commands shown for Ubuntu below
$ sudo cp ~/stx-remote-access-info/stx.ca.crt /usr/local/share/ca-certificates
$ sudo update-ca-certificates

# Authenticate with OIDC/LDAP on StarlingX ... and the token will be put in ~/.kube/config
$ ./oidc-auth -u <StarlingX-LDAP-Username> -c <OAM-FLOATING-IP>
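Before adding stx.ca.crt to the system trust store, you can inspect it and confirm it is a well-formed trust anchor. The sketch below generates a throwaway self-signed CA in place of the real stx.ca.crt, purely to show the openssl checks:

```shell
# Stand-in CA certificate (your real input is ~/stx-remote-access-info/stx.ca.crt)
openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -subj "/CN=demo-root-ca"

# Show the subject and validity period before trusting the certificate
openssl x509 -in /tmp/demo-ca.crt -noout -subject -enddate

# A self-signed root verifies against itself, confirming the file is a
# usable trust anchor; openssl prints "<file>: OK" on success
openssl verify -CAfile /tmp/demo-ca.crt /tmp/demo-ca.crt
```

Running the same `openssl x509` inspection on the real stx.ca.crt lets you confirm the issuer and expiry before committing it to /usr/local/share/ca-certificates.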
Postrequisites
Access the kubernetes remote CLI.
End User - Access Kubernetes Remote CLI¶
Access your StarlingX system through kubernetes remote CLI on your Linux-based system.
Prerequisites
You need to have a Linux-based system that has configured the Kubernetes remote CLI. See section: End User - Configure Kubernetes Remote CLI.
Procedure
Update your OIDC token in your KUBECONFIG.

$ ./oidc-auth -u <StarlingX-LDAP-Username> -c <OAM-FLOATING-IP>
Test kubernetes remote CLI commands.
$ kubectl get all
Install the Kubernetes Dashboard¶
You can optionally use the Kubernetes Dashboard web interface to perform cluster management tasks.
About this task
Kubernetes Dashboard allows you to perform common cluster management tasks such as deployment, resource allocation, real-time and historic status review, and troubleshooting.
Prerequisites
You must have cluster-admin RBAC privileges to install Kubernetes Dashboard.
Procedure
Create a namespace for the Kubernetes Dashboard.
~(keystone_admin)]$ kubectl create namespace kubernetes-dashboard
Create a certificate for use by the Kubernetes Dashboard.
Note
This example uses a self-signed certificate. In a production deployment, the use of a certificate signed by a trusted Certificate Authority is strongly recommended.
Create a location to store the certificate.
~(keystone_admin)]$ cd /home/sysadmin
~(keystone_admin)]$ mkdir -p /home/sysadmin/kube/dashboard/certs
Create the certificate.
~(keystone_admin)]$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /home/sysadmin/kube/dashboard/certs/dashboard.key \
    -out /home/sysadmin/kube/dashboard/certs/dashboard.crt \
    -subj "/CN=<FQDN>"
where <FQDN> is the fully qualified domain name for the StarlingX cluster's OAM floating IP.
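Before loading the pair into a secret, it is worth confirming that the private key actually matches the certificate; comparing the SHA-256 digests of their public keys is a common check. Sketched below with a throwaway pair standing in for dashboard.crt and dashboard.key:

```shell
# Throwaway pair, standing in for dashboard.crt / dashboard.key
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
    -keyout /tmp/dashboard.key -out /tmp/dashboard.crt -subj "/CN=dashboard.example"

# The certificate and key match when their public keys hash identically
CRT_SUM=$(openssl x509 -in /tmp/dashboard.crt -noout -pubkey | sha256sum)
KEY_SUM=$(openssl pkey -in /tmp/dashboard.key -pubout | sha256sum)
[ "$CRT_SUM" = "$KEY_SUM" ] && echo "certificate and key match"
```

A mismatched pair would make the dashboard's TLS handshake fail after deployment, so this one-line comparison can save a debugging round-trip.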
Create a kubernetes secret for holding the certificate and private key.
~(keystone_admin)]$ kubectl -n kubernetes-dashboard create secret generic kubernetes-dashboard-certs \
    --from-file=tls.crt=/home/sysadmin/kube/dashboard/certs/dashboard.crt \
    --from-file=tls.key=/home/sysadmin/kube/dashboard/certs/dashboard.key
Configure the kubernetes-dashboard manifest:
Download the recommended.yaml file.
~(keystone_admin)]$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
Edit the file.
Comment out the auto-generate-certificates argument and add the tls-cert-file and tls-key-file arguments.
The updates should look like:
...
args:
  # - --auto-generate-certificates
  - --namespace=kubernetes-dashboard
  - --tls-cert-file=/tls.crt
  - --tls-key-file=/tls.key
...
Apply the kubernetes dashboard recommended.yaml manifest.
~(keystone_admin)]$ kubectl apply -f recommended.yaml
Patch the kubernetes dashboard service to type=NodePort and port=32000.
~(keystone_admin)]$ kubectl patch service kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443, "nodePort":32000}]}}'

Test the Kubernetes Dashboard deployment.
The Kubernetes Dashboard listens on port 32000 at the StarlingX cluster's OAM floating IP.
Access the dashboard at https://<fqdn>:32000
Because the certificate created earlier in this procedure was not signed by a trusted CA, you will need to acknowledge an insecure connection from the browser.
Select the Kubeconfig option for signing in to the Kubernetes Dashboard. Note that on a remote host your kubeconfig file is typically located at $HOME/.kube/config. You may have to copy it somewhere more accessible.
You are presented with the Kubernetes Dashboard for the current context (cluster, user and credentials) specified in the kubeconfig file.