Kubernetes RBAC: Creating Users, Groups, and Securing Credentials with Vault


When managing a Kubernetes cluster, eventually you might need to give someone — or yourself — access without handing over too much control. Maybe it’s a team member who needs to deploy workloads, or you want a restricted account that can’t accidentally nuke your entire cluster.

Goals for This Lab

Before starting this lab I wanted to achieve three things: understand the process for creating new users, scope their permissions appropriately, and see how HashiCorp Vault fits into the workflow.

Authentication vs Authorization

This is the most important concept to understand before anything else. In Kubernetes these are two completely separate things:

Authentication

Proving who you are. Handled by certificates. The API server reads the CN field as the username and O as the group.

Authorization

What you’re allowed to do. Handled by RBAC. Changing permissions never requires touching certs, and creating a new cert doesn’t give anyone permissions until a binding exists.

Understanding RBAC

Kubernetes RBAC revolves around four objects:

Role

Permissions scoped to a specific namespace.

ClusterRole

Permissions that apply cluster-wide across all namespaces.

RoleBinding

Attaches a Role to a user or group within a namespace.

ClusterRoleBinding

Attaches a ClusterRole to a user or group across the entire cluster.

RBAC is Additive Only

There are no deny rules. If you want to restrict something, simply don’t grant it. Whatever isn’t listed isn’t allowed. This also means that if a user has multiple bindings, all of their permissions are combined — you can’t take something away with a second binding.
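
As a sketch of what that means in practice, imagine a hypothetical user holding both of these bindings. They end up with the union of the built-in view and edit roles, and no third binding could subtract anything:

```yaml
# Two bindings on the same (hypothetical) user: effective permissions are
# the union of both roles. No binding can take away what another grants.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: some-user-view
  namespace: default
subjects:
- kind: User
  name: some-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: some-user-edit
  namespace: default
subjects:
- kind: User
  name: some-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```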

Built-in ClusterRoles

Before creating custom roles, it’s worth knowing what Kubernetes already ships with. You can inspect any of them to understand what they grant and use them as a starting point.

For example, kubectl get clusterrole admin -o yaml will list permissions for the admin role. You can output this to a file with > newfile.yml to modify it and create a new ClusterRole, which is what I’ll demonstrate shortly.

Role           Access Level
cluster-admin  Full access to everything — essentially root
admin          Broad access; can manage namespace-level RBAC, cannot touch nodes or delete namespaces
edit           Like admin, but cannot view or modify Roles and RoleBindings
view           Read-only across most resources

Discovering API Groups and Resources

Every resource in Kubernetes belongs to an API group. Before writing roles you need to know what groups and resources exist on your cluster:

kubectl get apiservice

Then drill into a specific group to see its resources:

kubectl api-resources --api-group=apps
kubectl api-resources --api-group=batch
kubectl api-resources --api-group=networking.k8s.io
kubectl api-resources --api-group=""    # core resources — pods, services, configmaps etc

The NAMESPACED Column

The NAMESPACED column in the output tells you whether a resource is namespace-scoped (needs a Role) or cluster-scoped (needs a ClusterRole). Note that the full API group name must be exact — networking returns nothing, but networking.k8s.io works.

Verbs

Verbs are the actions you can grant on any resource:

Verb              What it does                   kubectl equivalent
get               Read a specific resource       kubectl get
list              List all instances             kubectl get
watch             Watch for real-time changes    kubectl get --watch
create            Create new instances           kubectl create, kubectl apply
update            Update existing instances      kubectl replace, kubectl apply
patch             Partially update a resource    kubectl patch
delete            Delete a resource              kubectl delete
deletecollection  Delete multiple instances      kubectl delete --all
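
As an illustration, a rule that grants only the first three verbs produces a strictly read-only slice of the API (the resource choice here is arbitrary):

```yaml
# Read-only rule: get/list/watch on pods and nothing else,
# so a subject bound to this can inspect pods but never change them
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```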

Creating the ClusterRole

For this lab we duplicated the built-in admin role as a starting point and named it admin-role. In practice you can use any built-in role as a base, create one from scratch, or skip custom roles and bind directly to a built-in one. The important thing is understanding what you’re granting.

Below is a slimmed down version of the admin ClusterRole. The key omission — no RBAC resources. This user simply cannot create or modify roles and bindings, so there’s no way to escalate their own permissions.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: admin-role
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log", "pods/portforward", "services",
              "configmaps", "secrets", "persistentvolumeclaims", "serviceaccounts"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "daemonsets", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses", "networkpolicies"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]

kubectl apply -f admin-role.yml

Modifying Permissions Later

Need to add or remove permissions? Just edit the ClusterRole — no cert changes needed. The cert handles authentication, the ClusterRole handles authorization. They’re completely independent.

kubectl edit clusterrole admin-role

Groups in Kubernetes

Kubernetes doesn’t have group objects you create. Groups are derived from the O (Organization) field embedded in a user’s certificate. When a user authenticates, Kubernetes reads their cert and uses CN as the username and O as the group.
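
You can see those two fields directly with openssl. This is just an illustration using a throwaway self-signed cert; real user certs come from the CSR flow later in this post:

```shell
# Throwaway self-signed cert, only for inspecting the subject fields --
# actual user certs are signed by the cluster via the CSR process
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/O=naxslabs-admins/CN=darnell"
# Kubernetes would read this as user "darnell" in group "naxslabs-admins"
openssl x509 -in demo.crt -noout -subject
```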

Why Use Groups?

Bind a role to a group once, and any user with that O value in their cert inherits the permissions automatically. No binding changes needed when adding new team members. If you have 10 devs, give them all O=naxslabs-admins in their certs and they all get the same permissions from a single binding.

Creating the ClusterRoleBinding

Rather than binding to an individual user, bind to the group. Any future user created with O=naxslabs-admins in their cert gets the same permissions without touching this file:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: naxslabs-admins-binding
subjects:
- kind: Group
  name: naxslabs-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin-role
  apiGroup: rbac.authorization.k8s.io

Verify Before Creating Certs

You can test permissions before even generating a cert. The binding just waits for anyone matching the group or username to show up:

kubectl auth can-i get pods --as-group=naxslabs-admins --as=darnell

Generating User Certificates

Rather than running commands manually every time, a script handles the whole process. Kubernetes only cares about two fields in the CSR subject — CN becomes the username, O becomes the group. Everything else (country, state, city) is ignored.

#!/bin/bash
user=$1
org=$2

openssl genrsa -out "$user.key" 2048
openssl req -new -key "$user.key" -out "$user.csr" \
  -subj "/C=US/ST=RI/L=Providence/O=$org/CN=$user"
base64 < "$user.csr" | tr -d "\n" > "$user-encoded.csr"
# Use | as the sed delimiter: base64 output can contain "/" characters,
# which would break the usual s/.../.../ form
sed "s|KEY|$(cat "$user-encoded.csr")|; s|USER|$user|" csr-template.yml > "$user-csr.yml"

Run it with the username and group as arguments:

bash certgen.sh darnell naxslabs-admins

The script generates the key, CSR, base64 encodes it, and injects it into a Kubernetes CSR manifest automatically. You’ll end up with three files: the private key, the CSR, and a ready-to-apply Kubernetes manifest.
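
The encode-and-inject step is easy to sanity-check on its own with fake data, no cluster required (file names here are throwaway):

```shell
# Stand-alone demo of the templating step: base64-encode a blob,
# strip newlines, and substitute it into a minimal template
printf 'fake-csr-data' | base64 | tr -d '\n' > demo-encoded.csr
printf 'name: USER\nrequest: KEY\n' > demo-template.yml
# "|" as the sed delimiter avoids clashes with "/" in base64 output
sed "s|KEY|$(cat demo-encoded.csr)|; s|USER|darnell|" demo-template.yml > demo-rendered.yml
cat demo-rendered.yml
```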

CSR Template

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: USER
spec:
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
  request: KEY

Submit and Approve

kubectl apply -f darnell-csr.yml
kubectl certificate approve darnell
kubectl get csr darnell -o jsonpath='{.status.certificate}' | base64 -d > darnell.crt

Certificate Approval is Intentionally Manual

Don’t automate the approval step. It’s a security checkpoint — someone should be reviewing and explicitly approving each certificate request. Auto-approving CSRs in a script defeats the purpose of the process.

Storing Credentials in HashiCorp Vault

Flat cert files sitting on disk are a liability. Store them in Vault instead:

vault kv put secret/kubernetes/users/darnell \
  cert=@darnell.crt \
  key=@darnell.key

The @ Prefix

The @ prefix tells Vault to read the file contents rather than treat the value as a literal string. Without it, Vault would just store the text darnell.crt instead of the actual certificate data.

Configuring kubeconfig Without Touching Disk

Once the certs are in Vault, configure kubeconfig using process substitution — credentials never hit disk:

kubectl config set-credentials darnell \
  --client-certificate=<(vault kv get -field=cert secret/kubernetes/users/darnell) \
  --client-key=<(vault kv get -field=key secret/kubernetes/users/darnell) \
  --embed-certs=true

The <() process substitution passes Vault output directly as a file descriptor. --embed-certs=true bakes the cert data directly into the kubeconfig so it's self-contained — no external file dependency that can break if files move or get deleted.
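
If process substitution is new to you, here is a bash-only demonstration with throwaway strings:

```shell
#!/usr/bin/env bash
# <( ) runs a command, connects its stdout to a pipe, and substitutes a
# file-descriptor path such as /dev/fd/63 on the command line. The consumer
# opens that path like a regular file, but nothing is written to disk.
echo <(true)                   # prints the synthetic /dev/fd path
cat <(printf 'never-on-disk')  # reads the "file" straight from the pipe
```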

Set the Context

kubectl config set-context darnell \
  --cluster=kubernetes \
  --user=darnell \
  --namespace=default

kubectl config use-context darnell

Distributing Credentials to Users via Vault

A Better Distribution Model

Instead of handing cert files to team members, give each user a Vault policy scoped only to their own path. Users log in to Vault with their own token, pull their cert and key, and configure their own kubeconfig. Every read is logged by Vault — full auditability with no files being emailed or shared around.

vault policy write darnell-policy - <<EOF
# Exact path, not a trailing /* glob: the glob would match children of this
# path but not the secret itself. On a KV v2 mount, the API path gains a
# data/ segment: secret/data/kubernetes/users/darnell
path "secret/kubernetes/users/darnell" {
  capabilities = ["read"]
}
EOF

This scales cleanly — each user gets their own policy scoped to their path, admins never touch raw cert files after initial generation, and Vault's audit log gives you a complete record of every credential access.

Rolling Back

To remove everything cleanly:

kubectl delete clusterrolebinding naxslabs-admins-binding
kubectl delete clusterrole admin-role
kubectl delete csr darnell
rm darnell.key darnell.crt darnell-encoded.csr darnell-csr.yml

Applying This at the Namespace Level

Everything covered in this post applies cluster-wide using ClusterRoles and ClusterRoleBindings — but the same exact approach works at the namespace level if you want tighter control. Say you have a contractor who should only be able to deploy to the staging namespace, or a junior dev who shouldn't touch production. Just swap ClusterRole for Role, ClusterRoleBinding for RoleBinding, and add a namespace field to the metadata:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: staging-deployer
  namespace: staging
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: staging-deployer-binding
  namespace: staging
subjects:
- kind: User
  name: contractor
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: staging-deployer
  apiGroup: rbac.authorization.k8s.io

Everything Else Stays the Same

The cert generation, CSR process, and Vault storage are all identical. The only difference is scope — a Role only grants permissions within the namespace it's created in, so even if someone has a valid cert, they can't touch anything outside that namespace.

Key Takeaways

You now have a solid foundation for managing Kubernetes access at scale — scoped permissions, certificate-based auth, and credentials safely stored in Vault rather than sitting on disk.

🔐 Certificate Auth  •  👥 Group-Based Permissions  •  🔒 Vault-Backed Credentials

  • Certs handle authentication, RBAC handles authorization — completely independent of each other
  • Use groups over individual user bindings — bind once, add users by cert
  • RBAC is additive only — restrict access by simply not granting it
  • Embed certs into kubeconfig so file paths don't become a dependency
  • Store credentials in Vault and use process substitution to keep them off disk entirely
  • The built-in ClusterRoles are a solid starting point — inspect them before building from scratch

Next Steps:

  • Explore namespace-scoped Roles for finer-grained access control
  • Set up Vault policies for each team member
  • Audit existing ClusterRoleBindings in your cluster
  • Consider automating cert generation as part of your onboarding process
