K8s Access Control: Structured Authorization Configuration
Why it matters?
The Structured Authorization Configuration feature allows configuring the API server's authorization chain via a configuration file. It enables capabilities that are difficult to maintain, or simply unavailable, through command-line flags: multiple webhooks, per-webhook failure policies, pre-filter rules, and fine-grained controls such as an explicit deny authorizer.
- Supports CEL rules for pre-filtering requests, preventing unnecessary calls to webhooks.
- Automatically reloads the authorizer chain when the configuration file is modified.
- Authorizers are checked in the order specified, giving higher priority to earlier modules for allowing or denying requests.
For more detail on the motivation and goals of the proposal, see SIG-Auth's KEP-3221 | 2023 KubeCon NA Session
The big picture
Once an API request has been authenticated by the API server's authentication modules (see prior posts), it moves to the authorization phase, where we find out whether the user can access a particular resource or non-resource endpoint (a generic URL). All parts of an API request must be allowed by some authorization mechanism in order to proceed.
NOTE: Access controls and policies that depend on specific fields of specific kinds of objects are handled by admission controllers, which run after authorization has completed. (I'll do a future post and lab about this.)
At this point the API server represents the request as a new kind of object called a SubjectAccessReview (SAR) | Go Docs.
The SubjectAccessReview is used to evaluate an authorization decision for a given subject (e.g. userInfo), checking whether they can perform a specific action. Its spec carries either resource attributes (like pods or deployments) or a non-resource URL (like /healthz); its status carries the resulting allow/deny decision.
Two example SubjectAccessReviews (note the differing groups):
{
  "apiVersion": "authorization.k8s.io/v1",
  "kind": "SubjectAccessReview",
  "metadata": {
    "creationTimestamp": null
  },
  "spec": {
    "groups": [
      "jit-edit",
      "system:authenticated"
    ],
    "resourceAttributes": {
      "namespace": "production",
      "resource": "pods",
      "verb": "list",
      "version": "v1"
    },
    "uid": "auth0|67590e55c55a1727d677a24a",
    "user": "jonny@runtime.diaz"
  },
  "status": {
    "allowed": false
  }
}
{
  "apiVersion": "authorization.k8s.io/v1",
  "kind": "SubjectAccessReview",
  "metadata": {
    "creationTimestamp": null
  },
  "spec": {
    "groups": [
      "priv:view",
      "system:authenticated"
    ],
    "resourceAttributes": {
      "namespace": "production",
      "resource": "pods",
      "verb": "list",
      "version": "v1"
    },
    "uid": "auth0|67599db66921394fe476efb4",
    "user": "jonny@runtime.diaz"
  },
  "status": {
    "allowed": false
  }
}
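Mechanically, a webhook authorizer just receives a SAR like the ones above as an HTTP POST body and returns the same object with its status filled in. A minimal sketch of that handler logic in Python (the allow rule here is a made-up placeholder for illustration, not the demo's actual policy):

```python
import json

# A webhook authorizer receives the SubjectAccessReview as a JSON body,
# inspects the spec, and returns the object with status populated.

def handle_subject_access_review(body: str) -> str:
    sar = json.loads(body)
    groups = sar.get("spec", {}).get("groups", [])
    allowed = "jit-edit" in groups        # placeholder policy, illustration only
    sar["status"] = {"allowed": allowed}
    return json.dumps(sar)

request_body = json.dumps({
    "apiVersion": "authorization.k8s.io/v1",
    "kind": "SubjectAccessReview",
    "spec": {"user": "jonny@runtime.diaz",
             "groups": ["jit-edit", "system:authenticated"]},
})
print(json.loads(handle_subject_access_review(request_body))["status"])  # {'allowed': True}
```

In the real exchange the API server is the HTTP client; the webhook only ever sees SAR objects, never raw API requests.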
Configuration
The API server can be configured with multiple authorization modules, either via CLI flags or, now, via the structured authorization configuration file. When multiple modules are configured, each is checked in sequence. If any authorizer approves or denies a request, that decision is returned immediately and no further authorizer is consulted. If no module has an opinion on the request, the request is denied. An overall deny verdict means the API server rejects the request and responds with an HTTP 403 (Forbidden) status.
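The short-circuit evaluation described above can be sketched in a few lines of Python (a toy model for illustration, not the actual kube-apiserver code; the authorizer names and rules are made up):

```python
# Toy model of the API server's authorizer chain (not real kube-apiserver code).
# Each authorizer returns "allow", "deny", or "no_opinion".

def authorize(chain, request):
    """Check authorizers in order; the first allow/deny decision wins."""
    for name, authorizer in chain:
        decision = authorizer(request)
        if decision in ("allow", "deny"):
            return name, decision   # short-circuit: later modules never run
    return None, "deny"             # no authorizer had an opinion -> HTTP 403

# Illustrative modules: a node authorizer with no opinion on user requests,
# and an RBAC authorizer that only allows the admin user.
chain = [
    ("node", lambda r: "no_opinion"),
    ("rbac", lambda r: "allow" if r["user"] == "kubernetes-admin" else "no_opinion"),
]

print(authorize(chain, {"user": "kubernetes-admin"}))    # ('rbac', 'allow')
print(authorize(chain, {"user": "jonny@runtime.diaz"}))  # (None, 'deny')
```

Note how ordering matters: placing a deny-capable webhook earlier in the chain gives it the chance to reject a request before RBAC ever sees it.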
NOTE: Combining both modes is not allowed; use either the flags or the config file, not both.
CLI flag mode:
--authorization-mode=Node,RBAC,Webhook
Structured authorization configuration file mode:
--authorization-config=/path/to/config/file
Structured authorization configuration breakdown
apiVersion: apiserver.config.k8s.io/v1
kind: AuthorizationConfiguration
# authorizers is an ordered list of authorizers to authorize requests against.
# Must be at least one.
authorizers:
  - type: Node
    name: node
  - type: RBAC
    name: rbac
Types Node and RBAC configure the same built-in authorizers available in flag mode.
Example with webhook
---
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Node
    name: node
  - type: Webhook
    name: opa-webhook-1
    webhook:
      authorizedTTL: 30s
      unauthorizedTTL: 30s
      timeout: 3s
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      failurePolicy: Deny
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/api-server/webhook-kubeconfig.yaml
      matchConditions:
        # only send resource requests to the webhook
        - expression: "(has(request.resourceAttributes))"
        # only intercept requests to production
        - expression: "(request.resourceAttributes.namespace == 'production')"
        # don't intercept requests from kube-system service accounts
        - expression: "!('system:serviceaccounts:kube-system' in request.groups)"
  - type: RBAC
    name: rbac
Type Webhook is where the new feature shines. Configure none, one, or many:
- Define a unique name for the webhook
- Set TTL timers for caching requests
- Set a webhook timeout
- Define the SAR version (v1beta1 or v1)
- Provide connectionInfo to the webhook
- Define a failurePolicy (Deny or NoOpinion)
- Define matchConditions using CEL to decide which requests are sent to this webhook
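The matchConditions semantics can be mimicked in plain Python to show the pre-filter behavior (a rough sketch; real evaluation uses CEL inside the API server, and all conditions must evaluate true for the SAR to be forwarded to the webhook):

```python
# Rough Python equivalents of the example's three CEL matchConditions.
# A SAR spec is only forwarded to the webhook when ALL conditions hold.

conditions = [
    # has(request.resourceAttributes): only resource requests
    lambda r: "resourceAttributes" in r,
    # request.resourceAttributes.namespace == 'production'
    lambda r: r.get("resourceAttributes", {}).get("namespace") == "production",
    # !('system:serviceaccounts:kube-system' in request.groups)
    lambda r: "system:serviceaccounts:kube-system" not in r.get("groups", []),
]

def matches(request):
    return all(cond(request) for cond in conditions)

prod_req = {"groups": ["jit-edit"],
            "resourceAttributes": {"namespace": "production", "resource": "pods"}}
kube_req = {"groups": ["system:serviceaccounts:kube-system"],
            "resourceAttributes": {"namespace": "production"}}
health_req = {"groups": ["system:authenticated"],
              "nonResourceAttributes": {"path": "/healthz"}}

print(matches(prod_req))    # True  -> sent to the webhook
print(matches(kube_req))    # False -> skipped, falls through to the next authorizer
print(matches(health_req))  # False -> non-resource request, skipped
```

When the conditions don't match, the webhook is skipped entirely and the request falls through to the next authorizer in the chain, which is what keeps control-plane traffic out of the OPA instance in this demo.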
NOTE:
- ALL API requests are sent to the authorizer modules
- Watch out for circular dependency issues.
- In this demo, without proper matchConditions, bootstrapping a fresh Kind cluster caused control-plane components' requests to be blocked by the non-existent OPA webhook, resulting in connection refused errors and preventing cluster creation.
Details for each field: K8s docs | K8s blog | Source code
Use case
For the purpose of this demo, I've configured the webhook authorizer as a classic OPA instance using a simple Rego policy, deployed within the cluster (explore more within the opa namespace). A few things to highlight:
- Notice the webhook is placed after the Node and before the RBAC authorizer
- The OPA instance must be deployed to the control-plane node
- A NodePort service is used with a static port
- The static node port (e.g., 30001 in https://localhost:30001/v0/data/k8s/authz/decision) is used in the webhook's kubeConfigFile server setting
Example Rego policy:
package k8s.authz

import rego.v1

deny contains reason if {
    not input.spec.user in {"kubernetes-admin", "system:kube-scheduler"}
    input.spec.resourceAttributes.namespace == "production"
    required_groups := {"system:authenticated", "jit-edit"}
    provided_groups := {group | some group in input.spec.groups}
    # `&` returns the intersection of two sets.
    count(required_groups & provided_groups) != count(required_groups)
    reason := sprintf("OPA: provided groups (%v) does not include all required groups: (%v)", [
        concat(", ", provided_groups),
        concat(", ", required_groups),
    ])
}

decision := {
    "apiVersion": input.apiVersion,
    "kind": "SubjectAccessReview",
    "status": {
        "denied": count(deny) >= 1,
        "allowed": count(deny) == 0,
        "reason": concat(" | ", deny),
    },
}
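The heart of the policy is a set-intersection check. Here is the same logic in Python for readers less familiar with Rego (an illustration only; the demo runs the Rego version inside OPA):

```python
# Same group check as the Rego policy: deny unless the subject's groups
# include every required group. Exempt users and other namespaces skip the check.

EXEMPT_USERS = {"kubernetes-admin", "system:kube-scheduler"}
REQUIRED_GROUPS = {"system:authenticated", "jit-edit"}

def decide(spec):
    provided = set(spec.get("groups", []))
    deny = (
        spec["user"] not in EXEMPT_USERS
        and spec.get("resourceAttributes", {}).get("namespace") == "production"
        # subset test, equivalent to Rego's intersection-count comparison
        and not REQUIRED_GROUPS <= provided
    )
    return {"allowed": not deny, "denied": deny}

viewer = {"user": "jonny@runtime.diaz",
          "groups": ["priv:view", "system:authenticated"],
          "resourceAttributes": {"namespace": "production"}}
editor = {"user": "jonny@runtime.diaz",
          "groups": ["jit-edit", "system:authenticated"],
          "resourceAttributes": {"namespace": "production"}}

print(decide(viewer))  # {'allowed': False, 'denied': True}
print(decide(editor))  # {'allowed': True, 'denied': False}
```

These two sample specs mirror the two SAR objects shown earlier: the priv:view session is denied, the jit-edit session is allowed.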
User workflow
- Let's leverage the structured authentication config from a previous post to authenticate to the API server.
- Then let's perform kubectl actions against the predefined production namespace:
  - first with the user jonny@runtime.diaz successfully authenticated against the Salsa IdP
  - then with the user jonny@runtime.diaz successfully authenticated against the Amapiano IdP
- Note the OPA webhook authorizer (Rego policy) will allow users in groups {"system:authenticated", "jit-edit"} and the built-in cluster-admin and super-admin users. Otherwise the webhook will respond with a denied status in the SAR object.
Access playground here: Fire up the custom iximiuz authN & authZ playground.
Execute user authentication flow against Salsa IdP
laborant@docker-01:~$ ./user-oauth-workflow-2.sh
Then perform a kubectl action against the production namespace:
laborant@docker-01:~$ kubectl --user=jonny@runtime.diaz -n production get pods
Notice the response is denied by the OPA webhook authorizer:
Error from server (Forbidden): pods is forbidden: User "jonny@runtime.diaz" cannot list resource "pods" in API group "" in the namespace "production": OPA: provided groups (priv:view, system:authenticated) does not include all required groups: (jit-edit, system:authenticated)
Execute user authentication flow against Amapiano IdP
laborant@docker-01:~$ ./user-oauth-workflow-1.sh
Then perform a kubectl action against the production namespace:
laborant@docker-01:~$ kubectl --user=jonny@runtime.diaz -n production get pods
Notice the response lists the running pods (the request was allowed by the OPA webhook authorizer):
NAME READY STATUS RESTARTS AGE
nginx-workload 1/1 Running 0 28m