Creating Hidden Super-Users in OpenShift
About Kubernetes Authentication
Kubernetes provides a number of authentication strategies through a plugin-based architecture, such as OAuth2.0 integration, ServiceAccount tokens and client X509 certificates.
The available authentication strategies for a cluster are controlled by the Kubernetes APIServer configuration and can be set via the APIServerConfig file or passed in as startup parameters.
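As an illustration, on a vanilla (non-OpenShift) Kubernetes control plane these strategies are typically toggled with startup flags like the following; the file paths here are placeholders:
kube-apiserver \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \             # enables X509 client certificate authentication
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \   # enables ServiceAccount token verification
  --oidc-issuer-url=https://issuer.example.com              # enables OIDC (OAuth2.0-style) authentication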
In OpenShift, this file is written to the Master Nodes under kube-apiserver-pod-XX directories in /etc/kubernetes/static-pod-resources for use by the Kubernetes APIServer static pods. It can also be reviewed as a ConfigMap in the openshift-kube-apiserver Namespace as follows:
~|⇒ oc debug node/master-XX -- ls /host/etc/kubernetes/static-pod-resources/ | grep apiserver
To use host binaries, run `chroot /host`
kube-apiserver-certs
kube-apiserver-pod-11
...
~|⇒ oc get cm -n openshift-kube-apiserver -o jsonpath="{.data.config\.yaml}" config | jq | head
{
  "admission": {
    "pluginConfig": {
      "network.openshift.io/ExternalIPRanger": {
        "configuration": {
          "allowIngressIP": false,
          "apiVersion": "network.openshift.io/v1",
          "externalIPNetworkCIDRs": null,
          "kind": "ExternalIPRangerAdmissionConfig"
        },
...
For more information about the available configuration options for the Kubernetes APIServer, I have been unable to locate details in the Kubernetes documentation; the best reference is the ControlPlane Go struct definitions. There is also an example available in the Cluster Kube APIServer Operator repository.
By default, OpenShift has a self-hosted OAuth2.0 service, provides a default kubeadmin account for bootstrapping, and also supports using x509 client certificates for authentication. This last method is what we will use to create the hidden users that will have escalated privileges within the cluster.
(Learn more about the kubeadmin credentials)
Note:
These created users will not have associated User or Identity objects present within the OpenShift cluster, but requests made by these accounts are not hidden and can still be seen by the KubeAPIServer auditing functionality. If audit logging is enabled on the OpenShift cluster, the requests will still be recorded, but their origin will be difficult to resolve.
A further property of these accounts, whether an advantage or a disadvantage, is that once created, the credentials cannot be revoked by the KubeAPIServer. Please ensure that any credentials you create have expiry dates configured and are stored in a secure environment. You have been WARNED.
X509 Authentication and RBAC
The process for creating a user account using the client x509 certificate method is as follows:
- A Certificate Signing Request (CSR) is created with the Common Name (CN) and Organisation (O) fields set
- A delegated Certificate Authority (CA) for the Kubernetes API is used to sign the CSR and provide a Certificate (CRT)
- The CRT is sent with requests to the Kube APIServer and authentication is performed based on the CRT’s signature (a condensed sketch of these steps follows)
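In shell terms, the three steps look roughly like this; the names and the $SERVER value are placeholders, and the full walkthrough appears later in this post:
openssl req -nodes -newkey rsa:2048 -keyout user.key -out user.csr -subj "/O=examplegroup/CN=exampleuser"
openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key -out user.crt   # ca.crt/ca.key: a CA the cluster trusts
oc whoami --client-certificate user.crt --client-key user.key --server $SERVER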
To expand on this: when a Certificate is signed by the cluster root CA, or by a delegated CA trusted by the cluster, the APIServer will recognise the signature as ‘pre-trusted’ and trust the contents of the Certificate to be used for authorization.
This provides the ability for Kubernetes to delegate user creation to external services, such as AWS Key Management Service (KMS) or Kerberos, and trust the resulting signed certificates. The APIServer does not know which Certificates to expect, so all certificates with valid signatures are approved, and there is currently no mechanism, such as a Certificate Revocation List (CRL), that can be used for removing access. Once a certificate has been created, it must be either destroyed or closely guarded.
Once a request has been authenticated, it is passed through the authorization pipeline (RBAC) to determine whether the requesting user is permitted to perform the requested actions (assuming the cluster has RBAC enabled).
The username and group associations for the request are extracted from the provided certificate’s Common Name and Organization fields respectively. Once the user and group associations have been resolved, the associated RoleBindings (RBs) and ClusterRoleBindings (CRBs) are checked until the request is approved for the action, or eventually dropped because no authorizing Role was found.
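To see exactly which username and groups a certificate will map to, its subject can be inspected with OpenSSL; client.crt here is a placeholder path and the subject line shown is illustrative:
openssl x509 -in client.crt -noout -subject
# e.g. subject=O = system:masters, CN = anyuser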
Inspecting the OpenShift Default Credentials
The system:admin user is created upon installation of a new OpenShift cluster and provided via the generated kubeconfig file.
By extracting the certificate from the kubeconfig file and reviewing its contents via OpenSSL, we can see that the original certificate signing request (CSR) was created with the O=system:masters and CN=system:admin values. This is then signed as below:
Issuer: OU=openshift, CN=XXXX
Validity
Not Before: ...
Not After : ...
Subject: O=system:masters, CN=system:admin
In the snippet above, we can see the Organization (O) and Common Name (CN) fields set for the Subject of the certificate.
- Common Name: system:admin
- Organisation: system:masters
As groups are associated with the certificate at creation time, it can be hard to track the permissions that have been granted within the cluster.
Notes on Security
A common way to minimise the risk of lost credentials is to create user accounts within the cluster and associate them with Roles/ClusterRoles that are very limited in what they can do. It is difficult to take this approach with users created as above, because the group association is embedded in the certificate.
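That said, the same signing flow can embed a custom, low-privilege group instead of system:masters, with only narrow permissions bound to that group name. A hedged sketch, with invented group/user names and assuming ca.crt/ca.key belong to a CA the cluster trusts:
openssl req -nodes -newkey rsa:2048 -keyout limited.key -out limited.csr -subj "/O=limited-viewers/CN=limiteduser"
openssl x509 -req -in limited.csr -CA ca.crt -CAkey ca.key -out limited.crt
oc adm policy add-cluster-role-to-group view limited-viewers   # bind only the built-in 'view' ClusterRole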
To revoke the permissions granted by the certificate, they would need to be removed from the system:masters group, which would create a number of issues for the OpenShift Platform. If the certificate is lost, there is no recommended way to close the security vulnerability without replacing the certificate authority and renewing all of the expected certificates, which is outside the scope of this blog post.
For the system:admin user, the permissions are associated with the group name directly using ClusterRoleBindings (CRBs). It is worth noting that there is no groups.user.openshift.io object matching the CRB, but the associated permissions will still be enforced when a request is processed by the API.
This can be seen below:
auth|⇒ oc get groups -A
No resources found
auth|⇒ oc get clusterroles | grep master
system:master XXXX-XX-XXT23:45:07Z
auth|⇒ oc get clusterrolebindings | grep master
system:masters ClusterRole/system:master 6h15m
auth|⇒ oc get clusterrolebindings system:masters -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: system:masters
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:master
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
Creating new Super-User Credentials
Now is where we learn to create our own cluster-administrator credentials.
- First, we must log in with an account that has cluster-admin access to the cluster to collect a delegated certificate authority. If you already have a trusted CA, this step can be skipped.
|⇒ mkdir auth && cd auth
auth|⇒ oc extract -n openshift-kube-apiserver-operator secret/node-system-admin-signer
tls.crt
tls.key
auth|⇒ ls
tls.crt tls.key
- Create a certificate signing request with any username and a group list that contains system:masters
auth|⇒ openssl \
req \
-nodes \
-newkey rsa:2048 \
-keyout anyuser.key \
-out anyuser.csr \
-subj "/C=US/O=system:masters/CN=anyuser"
....+............+.........+......+...............+.+.....+.......+......+.........+..+.+.....+....+.....+...+.......+......+..+...+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*..+..+...+....+.....+.+.
.......+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.....+..+.............+..+............+....+..+.........+...+...+....+...............+......+.....+.......+..+.........+.+.....+.+...............+..
......+....+..+.+...+.....+....+..+.+..............+..........+...+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
.............+.+...+.....+.+...+...+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*........+........+...+.+.........+.........+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*..+........
.......+........+...+....+......+.....+..........+..+...+....+..+.+...............+...+.....+....+..+...+.......+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- Sign the CSR to create a signed certificate
auth|⇒ openssl x509 -req -in anyuser.csr -CA ./tls.crt -CAkey ./tls.key -out anyuser.crt
auth|⇒ ls
anyuser.crt anyuser.csr anyuser.key tls.crt tls.key
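Note that without a -days argument, openssl x509 defaults the certificate lifetime to 30 days. Given the earlier warning about configuring expiry dates, it is worth setting this explicitly, for example:
openssl x509 -req -in anyuser.csr -CA ./tls.crt -CAkey ./tls.key -days 1 -out anyuser.crt   # 1-day lifetime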
- Test for success by using the credentials
auth|⇒ export SERVER=https://api.XXXX:6443
auth|⇒ oc whoami --client-certificate ./anyuser.crt --client-key ./anyuser.key --server $SERVER
anyuser
auth|⇒ oc auth can-i patch secrets -n kube-system --client-certificate ./anyuser.crt --client-key ./anyuser.key --server $SERVER
yes
NOTE
If you are using a self-signed certificate for the API Server, you will need to get a copy of the API’s signing CA and use this to validate the server: --certificate-authority ./apiserver.ca. If this is not important to you, the --insecure-skip-tls-verify flag can be used to bypass this validation.
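If your current admin kubeconfig embeds the cluster CA (rather than referencing it by path), one way to produce the apiserver.ca file used above is to extract it; a sketch:
oc config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > apiserver.ca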
Creating a Kubeconfig
The certificate has been shown to provide access to the cluster, but it is cumbersome to provide these arguments with each request. We can replace the arguments by storing the values in a Kubeconfig file and referencing it via the KUBECONFIG environment variable to ensure that kubectl/oc use these new values.
This can be achieved using the commands below:
auth|⇒ export KUBECONFIG=$(pwd)/new-kubeconf
auth|⇒ k config set-cluster custom-cert-cluster --server $SERVER --certificate-authority ./apiserver.ca
Cluster "custom-cert-cluster" set.
auth|⇒ k config set-credentials custom-cert-user --client-certificate ./anyuser.crt --client-key ./anyuser.key
User "custom-cert-user" set.
auth|⇒ k config set-context custom-cert-context --user custom-cert-user --cluster custom-cert-cluster
Context "custom-cert-context" created.
auth|⇒ k config use-context custom-cert-context
Switched to context "custom-cert-context".
auth|⇒ k get pods
No resources found in default namespace.
auth|⇒ oc whoami
anyuser
Removing Changes (cycle the node-system-admin-ca)
To ‘deny’ authentication to any of the certificates created by the existing node-system-admin-ca, we need to cycle the CA certificate and ensure the old certificate is removed from the trusted certificate bundle. For the signing key, we can achieve this by deleting the node-system-admin-signer Secret. There is also an associated node-system-admin-ca ConfigMap which is used to contain all of the trusted certificates, and which generally just includes all previous and current public certificates for the node-system-admin-ca.
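Before cycling anything, it can be useful to check how many public certificates the trusted bundle currently holds; this assumes the standard ca-bundle.crt data key:
oc get cm -n openshift-kube-apiserver-operator node-system-admin-ca -o jsonpath='{.data.ca-bundle\.crt}' | grep -c 'BEGIN CERTIFICATE'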
To cycle the certificate, the following commands can be used:
auth|⇒ oc delete -n openshift-kube-apiserver-operator secret/node-system-admin-signer
secret "node-system-admin-signer" deleted
auth|⇒ oc whoami --client-certificate ./anyuser.crt --client-key ./anyuser.key --server $SERVER
anyuser
auth|⇒ oc delete -n openshift-kube-apiserver-operator cm/node-system-admin-ca
configmap "node-system-admin-ca" deleted
test|⇒ oc whoami --client-certificate ./anyuser.crt --client-key ./anyuser.key --server $SERVER
kube:admin
NOTE that the anyuser certificates can still be used for authentication after the node-system-admin-signer Secret is removed. The CA public certificates are defined in the ConfigMap, so in this case the OpenShift cluster still trusts certificates signed by the key from the deleted Secret.
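Once the operator has regenerated the ConfigMap, you can check whether a given client certificate is still trusted by verifying it against the published bundle; a hedged check:
oc extract -n openshift-kube-apiserver-operator cm/node-system-admin-ca --keys=ca-bundle.crt   # writes ca-bundle.crt locally
openssl verify -CAfile ca-bundle.crt anyuser.crt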