When managing a Kubernetes cluster, eventually you need to give someone access without handing over too much control. Maybe it’s a team member who needs to deploy workloads, or you want a restricted account that can’t accidentally nuke the entire cluster. This covers the process end to end — creating users, scoping permissions, and storing credentials in Vault.
Authentication vs Authorization
This is the most important distinction to get right before anything else. In Kubernetes these are two completely separate things:
Authentication
Proving who you are. In this setup it's handled by client certificates: the API server reads the certificate's CN field as the username and its O field as the group. (Kubernetes also supports other authenticators, like tokens and OIDC, but certificates are what we use here.)
Authorization
What you’re allowed to do. Handled by RBAC. Changing permissions never requires touching certs — they’re completely independent.
Understanding RBAC
Kubernetes RBAC revolves around four objects:
Role
Permissions scoped to a specific namespace.
ClusterRole
Permissions that apply cluster-wide across all namespaces.
RoleBinding
Attaches a Role to a user or group within a namespace.
ClusterRoleBinding
Attaches a ClusterRole to a user or group across the entire cluster.
There are no deny rules. If you want to restrict something, simply don’t grant it. Whatever isn’t listed, isn’t allowed. If a user has multiple bindings, all permissions are combined — you can’t take something away with a second binding.
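As a sketch of the additive model (the binding names below are illustrative, not part of this lab): give one user the built-in view role cluster-wide and the built-in edit role in a single namespace, and the effective permissions are the union of both. Nothing in either binding can subtract from the other.

```yaml
# Illustrative only: darnell can view everything cluster-wide,
# and additionally edit resources in the staging namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: darnell-view
subjects:
- kind: User
  name: darnell
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
# A RoleBinding may reference a ClusterRole; the grant is then
# limited to the binding's own namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: darnell-edit-staging
  namespace: staging
subjects:
- kind: User
  name: darnell
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```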
Built-in ClusterRoles
Before creating custom roles, know what Kubernetes already ships with. You can inspect any of them with kubectl get clusterrole admin -o yaml and use them as a starting point.
| Role | Access Level |
|---|---|
| cluster-admin | Full access to everything; effectively root |
| admin | Broad access, can manage namespace-level RBAC; cannot modify nodes or delete namespaces |
| edit | Read/write on most namespaced resources, but no access to Roles or RoleBindings at all |
| view | Read-only across most resources (notably excludes Secrets) |
Discovering API Groups and Resources
Every resource in Kubernetes belongs to an API group. Before writing roles you need to know what groups and resources exist:
kubectl get apiservice
Then drill into a specific group:
kubectl api-resources --api-group=apps
kubectl api-resources --api-group=batch
kubectl api-resources --api-group=networking.k8s.io
kubectl api-resources --api-group="" # core resources — pods, services, configmaps etc
The NAMESPACED column tells you whether a resource needs a Role or a ClusterRole. The full API group name must be exact — networking returns nothing, networking.k8s.io works.
Verbs
| Verb | What it does | kubectl equivalent |
|---|---|---|
| get | Read a specific resource by name | kubectl get |
| list | List all instances of a resource | kubectl get |
| watch | Watch for real-time changes | kubectl get --watch |
| create | Create new instances | kubectl create, kubectl apply |
| update | Update existing instances | kubectl replace, kubectl apply |
| patch | Partially update a resource | kubectl patch |
| delete | Delete a resource | kubectl delete |
| deletecollection | Delete multiple instances at once | kubectl delete --all |
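These verbs slot directly into a rule's verbs list. A minimal sketch (resource choices here are illustrative): the first rule grants read-only access, the second grants full write access.

```yaml
rules:
# Read-only: enough for kubectl get and kubectl get --watch
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
# Full write access on deployments
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```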
Creating the ClusterRole
For this lab we duplicated the built-in admin role and named it admin-role. The key omission — no RBAC resources. This user can’t create or modify roles and bindings, so there’s no way to escalate their own permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: admin-role
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log", "pods/portforward", "services",
              "configmaps", "secrets", "persistentvolumeclaims", "serviceaccounts"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "daemonsets", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses", "networkpolicies"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
kubectl apply -f admin-role.yml
Need to add or remove permissions? Edit the ClusterRole — no cert changes needed. The cert handles authentication, the ClusterRole handles authorization. They’re completely independent.
kubectl edit clusterrole admin-role
Groups in Kubernetes
Kubernetes doesn’t have group objects you create. Groups are derived from the O (Organization) field embedded in a user’s certificate. When a user authenticates, Kubernetes reads their cert and uses CN as the username and O as the group.
Bind a role to a group once, and any user with that O value in their cert inherits the permissions automatically. If you have 10 devs, give them all O=naxslabs-admins in their certs and they all get the same permissions from a single binding.
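Since the group comes entirely from the certificate's O field, you can check exactly what Kubernetes will see by inspecting a cert's subject. The cert below is self-signed purely for illustration; in the lab the real cert is issued by the cluster CA.

```shell
# Throwaway self-signed cert with the same subject layout as the lab certs
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/O=naxslabs-admins/CN=darnell"

# Kubernetes reads CN as the username and O as the group
openssl x509 -in demo.crt -noout -subject
```

The subject line should show both O=naxslabs-admins and CN=darnell (exact formatting varies by OpenSSL version).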
Creating the ClusterRoleBinding
Bind to the group rather than individual users. Any future user created with O=naxslabs-admins in their cert gets the same permissions without touching this file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: naxslabs-admins-binding
subjects:
- kind: Group
  name: naxslabs-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin-role
  apiGroup: rbac.authorization.k8s.io
You can test permissions before generating a cert. The binding just waits for anyone matching the group to show up:
kubectl auth can-i get pods --as-group=naxslabs-admins --as=darnell
Generating User Certificates
Kubernetes only cares about two fields in the CSR subject — CN becomes the username, O becomes the group. Everything else is ignored. A script handles the whole process:
#!/bin/bash
user=$1
org=$2

openssl genrsa -out "$user.key" 2048
openssl req -new -key "$user.key" -out "$user.csr" \
  -subj "/C=US/ST=RI/L=Providence/O=$org/CN=$user"

# Base64-encode the CSR onto a single line for the Kubernetes manifest
base64 < "$user.csr" | tr -d "\n" > "$user-encoded.csr"

# Use "|" as the sed delimiter: base64 output can contain "/", which
# would terminate a normal s/.../.../ expression early
sed "s|KEY|$(cat "$user-encoded.csr")|; s|USER|$user|" csr-template.yml > "$user-csr.yml"
bash certgen.sh darnell naxslabs-admins
You’ll end up with three files: the private key, the CSR, and a ready-to-apply Kubernetes manifest.
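A quick sanity check on the encoding step (using a stand-in file here so it runs anywhere, since the real CSR may not exist yet): encoding and decoding should round-trip byte-for-byte.

```shell
# Stand-in for the real CSR file
printf 'fake-csr-contents\n' > demo.csr

# Encode exactly the way the script does, then round-trip it back
base64 < demo.csr | tr -d "\n" > demo-encoded.csr
base64 -d demo-encoded.csr | diff - demo.csr && echo "round-trip OK"
```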
CSR Template
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: USER
spec:
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
  request: KEY
Submit and Approve
kubectl apply -f darnell-csr.yml
kubectl certificate approve darnell
kubectl get csr darnell -o jsonpath='{.status.certificate}' | base64 -d > darnell.crt
Don’t automate the approval step. It’s a security checkpoint — someone should be reviewing and explicitly approving each certificate request. Auto-approving CSRs defeats the purpose.
Storing Credentials in HashiCorp Vault
Flat cert files sitting on disk are a liability. Store them in Vault instead:
vault kv put secret/kubernetes/users/darnell \
  cert=@darnell.crt \
  key=@darnell.key
The @ prefix tells Vault to read the file contents rather than treat the value as a literal string. Without it, Vault stores the text darnell.crt instead of the actual certificate data.
Configuring kubeconfig Without Touching Disk
Once the certs are in Vault, configure kubeconfig using process substitution — credentials never hit disk:
kubectl config set-credentials darnell \
--client-certificate=<(vault kv get -field=cert secret/kubernetes/users/darnell) \
--client-key=<(vault kv get -field=key secret/kubernetes/users/darnell) \
--embed-certs=true
The <() process substitution passes Vault output directly as a file descriptor. --embed-certs=true bakes the cert data into the kubeconfig so there's no external file dependency.
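Process substitution is a bash/zsh feature, not anything Vault-specific. A quick local illustration of what `<()` does:

```shell
# <(cmd) expands to a path like /dev/fd/63 whose contents are cmd's stdout,
# so any tool that expects a filename can consume command output directly
cat <(echo "first") <(echo "second")

# Works anywhere a filename is expected, e.g. comparing two command outputs
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "identical"
```

Note this requires bash or zsh; plain POSIX sh does not support `<()`.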
Set the Context
kubectl config set-context darnell \
--cluster=kubernetes \
--user=darnell \
--namespace=default
kubectl config use-context darnell
Distributing Credentials via Vault
Instead of handing cert files to team members, give each user a Vault policy scoped to their own path. Users log in to Vault with their own token, pull their cert and key, and configure their own kubeconfig. Every read is logged by Vault — full auditability with no files being emailed or shared.
vault policy write darnell-policy - <<EOF
# On KV v2 mounts, policy paths include a "data/" segment
path "secret/data/kubernetes/users/darnell" {
  capabilities = ["read"]
}
EOF
Each user gets their own policy scoped to their path. Admins never touch raw cert files after initial generation, and Vault's audit log gives you a complete record of every credential access.
Rolling Back
kubectl delete clusterrolebinding naxslabs-admins-binding
kubectl delete clusterrole admin-role
kubectl delete csr darnell
rm darnell.key darnell.csr darnell.crt darnell-encoded.csr darnell-csr.yml
Namespace-Scoped Access
Everything above applies cluster-wide, but the same approach works at the namespace level for tighter control — a contractor who should only deploy to staging, or a junior dev who shouldn't touch production. Swap ClusterRole for Role, ClusterRoleBinding for RoleBinding, and add a namespace field:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: staging-deployer
  namespace: staging
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: staging-deployer-binding
  namespace: staging
subjects:
- kind: User
  name: contractor
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: staging-deployer
  apiGroup: rbac.authorization.k8s.io
The cert generation, CSR process, and Vault storage are all identical. A Role only grants permissions within the namespace it's created in — valid cert or not, nothing outside that namespace is accessible.
Key Points
- Certs handle authentication, RBAC handles authorization — completely independent of each other
- Use groups over individual user bindings — bind once, add users by cert
- RBAC is additive only — restrict access by simply not granting it
- Embed certs into kubeconfig so file paths don't become a dependency
- Store credentials in Vault and use process substitution to keep them off disk entirely
- Built-in ClusterRoles are a solid starting point — inspect them before building from scratch
