
Kubernetes Authentication with Okta OIDC

Replacing certificate-based auth with centralized identity — no more manual cert generation

In a previous post I covered Kubernetes RBAC from scratch — creating users with certificates, binding them to groups, and storing credentials in Vault. That approach works, but it has friction: every new user needs a certificate generated, a CSR submitted and approved, and credentials distributed. It doesn’t scale, and there’s no central place to revoke access instantly.

This post picks up where that one left off. The RBAC configuration stays exactly the same — ClusterRoles, ClusterRoleBindings, group-based permissions — but authentication moves to Okta via OIDC. Users log in with their existing Okta credentials and their group membership in Okta drives what they can do in the cluster.

What Changes vs Certificate Auth

                    Certificate Auth          OIDC / Okta
User identity       CN field in cert          Token claims from Okta
Group membership    O field in cert           Groups claim in token
User management     Manual cert generation    Managed in Okta
Revoking access     Cert expiration or CRL    Disable in Okta — immediate
RBAC                ClusterRole/Binding       Same — no changes needed

How It Works

Kubernetes doesn’t manage users itself — it delegates authentication to external systems, and OIDC is one of them. When a user runs kubectl, a plugin called kubelogin handles the browser-based login flow with Okta and returns a signed ID token (a JWT). kubectl passes that token to the API server, which validates it against Okta’s public signing keys and reads the claims to determine who the user is and which groups they belong to.

User runs kubectl
       │
       ▼
kubelogin (exec credential in kubeconfig)
       │
       ▼
Browser opens → Okta login page
       │
       ▼
User authenticates (password + MFA)
       │
       ▼
Okta issues JWT token:
  sub: unique user ID
  groups: ["naxslabs-admins"]
       │
       ▼
kubectl sends request to API server with token
       │
       ▼
API server validates token → reads groups claim
→ matches ClusterRoleBinding → request allowed or denied

OIDC is Additive

Adding OIDC doesn’t replace certificate-based auth. Your existing kubernetes-admin cert keeps working alongside it. Always keep that around as a safety net — if Okta goes down or something breaks in the OIDC config, you can still get into your cluster with the admin kubeconfig.
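A concrete break-glass check is worth rehearsing before you need it. The path below assumes a kubeadm-provisioned cluster, where it is the default admin kubeconfig location; adjust it for your setup:

```shell
# Break-glass: bypass OIDC entirely using the certificate-based admin kubeconfig.
# /etc/kubernetes/admin.conf is the kubeadm default path; adjust for your cluster.
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
```

If this works while your OIDC context doesn't, the cluster is fine and the problem is somewhere in the Okta or kubelogin configuration.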

Okta App Setup

Create a new app in Okta for the Kubernetes cluster.

Applications → Create App Integration:

  • Sign-in method: OIDC – OpenID Connect
  • Application type: Native Application

Why Native, Not Web?

kubelogin is a CLI tool that runs locally on the user’s machine. The Native Application type enables the PKCE flow, the recommended way to authenticate public clients like CLI tools that can’t safely store a client secret. No client secret is needed or used.

Grant types: Authorization Code only. Enable Require PKCE. Set Client Authentication to None.

Sign-in redirect URI:

http://localhost:8000

This is where kubelogin spins up a local server to catch the authorization code after Okta redirects back; kubelogin then exchanges that code for tokens directly with Okta over HTTPS. The redirect URI is HTTP, but it only ever listens on loopback — nothing sensitive travels over an unencrypted network connection.

Assignments: Limit access to your admin group — don’t leave this open to everyone in the org.

Configure the Groups Claim

Okta doesn’t include group membership in tokens by default. You have to explicitly add it.

Security → API → Authorization Servers → default → Claims → Add Claim:

Field                    Value
Name                     groups
Include in token type    ID Token (Always)
Value type               Groups
Filter                   Matches regex .*
Include in               Any scope

Authorization Server Access Policy

The groups claim and the app assignment aren’t enough on their own. The Okta authorization server has its own access policy that must also allow the request. Without this you’ll get a no_matching_policy error in the Okta system log even though everything else looks correct.

Security → API → Authorization Servers → default → Access Policies → Add Policy: assign it to your K8S app, add a rule allowing your group with Authorization Code grant type.

Installing kubelogin

# Via krew
kubectl krew install oidc-login

# Or direct binary
curl -LO https://github.com/int128/kubelogin/releases/latest/download/kubelogin_linux_amd64.zip
unzip kubelogin_linux_amd64.zip
sudo mv kubelogin /usr/local/bin/kubectl-oidc_login

Test the Token Before Touching the API Server

Before making any changes to the cluster, verify the Okta flow works and check what’s in the token:

Your Issuer URL

If you’ve configured a custom domain in Okta (under Customizations → Domain), use that — for example https://auth.yourdomain.com/oauth2/default. If not, use your Okta integrator address directly: https://your-integrator-id.okta.com/oauth2/default. Note that the custom domain must also be set as the issuer in your authorization server settings (Security → API → Authorization Servers → default); otherwise the issuer URL in the token won’t match and Kubernetes will reject it.

kubectl oidc-login setup \
  --oidc-issuer-url=https://your-okta-domain/oauth2/default \
  --oidc-client-id=your-client-id \
  --listen-address=localhost:8000

This opens a browser, completes the login, and prints the decoded token. Verify the groups claim is present and contains the right groups before proceeding. If it’s missing, the Okta claim configuration needs fixing first.
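If you want to inspect a raw token yourself (for example one copied from kubelogin's cache), the payload is just base64url-encoded JSON. Here is a small helper, a sketch using standard tools rather than anything shipped with kubelogin, that decodes it:

```shell
# Print the decoded JSON payload (the middle segment) of a JWT.
decode_jwt_payload() {
  # Take the second dot-separated segment and map base64url chars to base64
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url drops the '=' padding; restore it so base64 -d accepts the input
  case $((${#seg} % 4)) in
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  printf '%s' "$seg" | base64 -d
}

# Usage: decode_jwt_payload "$ID_TOKEN"   # then eyeball the groups claim
```

Remember this only decodes the payload; it does not verify the signature, which is the API server's job.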

Configuring the Kubernetes API Server

The API server needs to know where to validate tokens. Edit the static pod manifest on each control plane node — kube-apiserver is a static pod managed locally on each node, not a cluster-wide resource, so this can’t be done with kubectl.

sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml

Find the command: section and add these four flags:

- --oidc-issuer-url=https://auth.yourdomain.com/oauth2/default
- --oidc-client-id=your-client-id
- --oidc-username-claim=sub
- --oidc-groups-claim=groups

The API server restarts automatically when the manifest is saved. Give it about a minute, then verify the flags loaded:

ps aux | grep kube-apiserver | grep oidc

Why sub and Not email for Username Claim

Okta’s ID token doesn’t include email by default — you’d need to add it as a separate claim. Using sub (the unique Okta user ID like 00uyqenacur2MEKEp697) is simpler and works fine because Kubernetes permissions are group-based, not username-based. The username is only used for audit logging. If you want a human-readable username, add an email or preferred_username claim to the Okta authorization server and update the flag accordingly.

Configuring kubeconfig

kubectl config set-credentials oidc \
  --exec-api-version=client.authentication.k8s.io/v1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg="--oidc-issuer-url=https://auth.yourdomain.com/oauth2/default" \
  --exec-arg="--oidc-client-id=your-client-id" \
  --exec-arg="--listen-address=localhost:8000" \
  --exec-interactive-mode=IfAvailable

kubectl config set-context oidc \
  --cluster=kubernetes \
  --user=oidc \
  --namespace=default

kubectl config use-context oidc

After switching context, the next kubectl command automatically triggers the browser login. The token is cached so subsequent commands don’t re-prompt until it expires (controlled by Okta token lifetime, default 1 hour).
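If you need a fresh login before the token expires (for example, right after changing a user's Okta group membership), clear kubelogin's token cache; the path below is the cache location this setup uses:

```shell
# Remove cached tokens so the next kubectl command triggers a new browser login
rm -rf ~/.kube/cache/oidc-login
```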

RBAC — Nothing Changes

The ClusterRoles and ClusterRoleBindings from the previous post work unchanged. The only difference is group membership now comes from Okta instead of the cert O field. A binding like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: your-group-binding
subjects:
- kind: Group
  name: naxslabs-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin-role
  apiGroup: rbac.authorization.k8s.io

This works because Okta puts naxslabs-admins in the token’s groups claim; the API server reads the claim and matches it against this binding. No RBAC changes are needed when switching from cert auth to OIDC.
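You can sanity-check the binding without a full OIDC login by impersonating the group from your existing admin context. The user name below is arbitrary (only the group matters for this binding):

```shell
# Ask the API server whether a member of naxslabs-admins could list pods
kubectl auth can-i list pods --as=oidc-test-user --as-group=naxslabs-admins

# After logging in via OIDC, verify as the real user
kubectl auth can-i list pods --context oidc
```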

Onboarding New Users

The End-User Experience

New users get a pre-configured kubeconfig distributed to them — nothing to set up manually. They run kubectl get pods, a browser opens, they log in with their Okta credentials, and they’re in. The kubeconfig contains the issuer URL and client ID, but neither is sensitive — there’s no client secret in this setup.

Store the kubeconfig in Vault and users pull it themselves:

# Admin stores it
vault kv put secret/kubernetes/kubeconfig config=@~/.kube/config

# User pulls it
vault kv get -field=config secret/kubernetes/kubeconfig > ~/.kube/config
kubectl config use-context oidc
kubectl get pods   # browser login, done

Authentication and Session Policies Still Apply

Getting OIDC working is only part of the picture. Okta’s authentication policies and session policies still govern how users prove their identity before a token is ever issued — and they apply to every app including your Kubernetes integration.

Don’t Skip This Step

It’s easy to get OIDC working with a permissive policy and forget to tighten it up afterward. The authentication policy attached to your Kubernetes app determines what factors are required, how often users must re-authenticate, and under what conditions access is granted or stepped up.

Things to consider for your Kubernetes app specifically:

  • MFA requirements — decide whether password alone is sufficient or whether a second factor (Okta Verify, WebAuthn, TOTP) is required for cluster access
  • Re-authentication frequency — Okta can require re-auth every session, or allow a longer window. For cluster access you likely want this tighter than a general SaaS app
  • Network zone restrictions — consider limiting cluster authentication to known IP ranges or VPN if your cluster isn’t public-facing
  • Device trust — if you have Okta Verify with device trust configured, you can require a managed or trusted device before issuing tokens

Okta Policy Layers — All Three Must Allow the Request

There are three separate policy checks in Okta that all need to pass before a token is issued. A common source of confusion is getting one right and not realizing the others are blocking:

  • App assignment — is this user or group assigned to the app?
  • Authentication policy — does the user meet the factor requirements attached to this app?
  • Authorization server access policy — does the authorization server have a policy that covers this app and allows the grant type?

All three must pass. Failing any one of them results in an access denied error even if the other two are configured correctly.

Troubleshooting

  • no_matching_policy in Okta logs
    Cause: authorization server access policy missing for this app
    Fix: Security → API → Authorization Servers → default → Access Policies

  • "claim not present" in API server logs
    Cause: token is missing the claim specified in --oidc-username-claim
    Fix: use sub — always present in Okta tokens

  • Unauthorized after a successful Okta login
    Cause: API server is rejecting the token
    Fix: check the API server logs: kubectl logs -n kube-system kube-apiserver-km01 | grep -i oidc

  • Issuer URL mismatch error
    Cause: iss claim in the token doesn’t match --oidc-issuer-url exactly
    Fix: check the Authorization Server issuer URL in Okta matches the flag

  • Browser opens but token isn’t cached
    Cause: stale cache from failed attempts
    Fix: rm -rf ~/.kube/cache/oidc-login

Key Takeaways

OIDC with Okta eliminates the operational overhead of certificate-based user management while making access control more immediate and auditable. The RBAC layer stays exactly the same — only the authentication mechanism changes.

  • OIDC is additive — cert-based admin access keeps working alongside it
  • The API server validates tokens, kubelogin fetches them — two separate concerns
  • Use sub for username claim — email isn’t in Okta ID tokens by default
  • Three Okta policy layers must all be correct: app assignment, authentication policy, and authorization server access policy
  • Disable a user in Okta and they immediately lose cluster access — no waiting for cert expiry
  • End users never touch the OIDC config — they get a pre-built kubeconfig and just login

NAXS Labs