
User guide: Resource-level authorization with User-Managed Access (UMA) resource registry

Fetch resource metadata relevant for your authorization policies from Keycloak authorization clients, using User-Managed Access (UMA) protocol.

Authorino capabilities featured in this guide:

  • External auth metadata → User-Managed Access (UMA) resource registry
  • Identity verification & authentication → JWT verification
  • Authorization → Open Policy Agent (OPA) Rego policies

Check out as well the user guides about OpenID Connect Discovery and authentication with JWTs, and Open Policy Agent (OPA) Rego policies.

For further details about Authorino features in general, check the docs.


Requirements

  • Kubernetes server with permissions to install cluster-scoped resources (operator, CRDs and RBAC)
  • Identity Provider (IdP) that implements OpenID Connect authentication and OpenID Connect Discovery (e.g. Keycloak)
  • jq, to extract parts of JSON responses

If you do not own a Kubernetes server already and just want to try out the steps in this guide, you can create a local containerized cluster by executing the command below. In this case, the main requirement is having Kind installed, with either Docker or Podman.

kind create cluster --name authorino-tutorial

Deploy the identity provider and authentication server by executing the command below. For the examples in this guide, we are going to use a Keycloak server preloaded with all required realm settings.

kubectl create namespace keycloak
kubectl -n keycloak apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/keycloak/keycloak-deploy.yaml
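
Keycloak can take a couple of minutes to become ready. Assuming the bundle above creates a Deployment named keycloak, you can wait for it before proceeding:

kubectl -n keycloak wait --timeout=300s --for=condition=Available deployment/keycloak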

The next steps walk you through installing Authorino, deploying and configuring a sample service called Talker API to be protected by the authorization service.

Using Kuadrant

If you are a user of Kuadrant and already have your workload cluster configured and sample service application deployed, as well as your Gateway API network resources applied to route traffic to your service, skip straight to step ❺.

At step ❺, instead of creating an AuthConfig custom resource, create a Kuadrant AuthPolicy one. The schema of the AuthConfig's spec matches the one of the AuthPolicy's, except spec.host, which is not available in the Kuadrant AuthPolicy. Host names in a Kuadrant AuthPolicy are inferred automatically from the Kubernetes network object referred to in spec.targetRef and from the route selectors declared in the policy.
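
For reference, below is a minimal sketch of what the equivalent Kuadrant AuthPolicy could look like. The HTTPRoute name (talker-api) is hypothetical, and the exact apiVersion and nesting of the auth rules depend on the Kuadrant version installed; reuse the authentication, metadata and authorization rules from the AuthConfig in step ❺.

kubectl apply -f -<<EOF
apiVersion: kuadrant.io/v1
kind: AuthPolicy
metadata:
  name: talker-api-protection
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: talker-api # hypothetical; point to the route of your own service
  # authentication, metadata and authorization rules as in the AuthConfig of step ❺
  # (note: no spec.host; host names are inferred from the targetRef)
EOF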

For more about using Kuadrant to enforce authorization, check out Kuadrant auth.


❶ Install the Authorino Operator (cluster admin required)

The following command will install the Authorino Operator in the Kubernetes cluster. The operator manages instances of the Authorino authorization service.

curl -sL https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/utils/install.sh | bash -s
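
Before moving on, you can verify that the operator is running. By default, the install script deploys it to the authorino-operator namespace (adjust if you customized the installation):

kubectl get pods -n authorino-operator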

❷ Deploy Authorino

The following command will request an instance of Authorino as a separate service [1] that watches for AuthConfig resources in the default namespace [2], with TLS disabled [3].

kubectl apply -f -<<EOF
apiVersion: operator.authorino.kuadrant.io/v1beta1
kind: Authorino
metadata:
  name: authorino
spec:
  listener:
    tls:
      enabled: false
  oidcServer:
    tls:
      enabled: false
EOF
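
The operator reconciles this custom resource into a deployment of the Authorino service. Assuming the default naming, in which the Deployment takes the name of the Authorino CR, you can wait for it to become available:

kubectl wait --timeout=300s --for=condition=Available deployment/authorino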

❸ Deploy the Talker API

The Talker API is a simple HTTP service that echoes back in the response whatever it gets in the request. We will use it in this guide as the sample service to be protected by Authorino.

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml

❹ Setup Envoy

The following bundle from the Authorino examples deploys the Envoy proxy and configuration to wire up the Talker API behind the reverse-proxy, with external authorization enabled with the Authorino instance [4].

kubectl apply -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml

The command above creates an Ingress with host name talker-api.127.0.0.1.nip.io. If you are using a local Kubernetes cluster created with Kind, forward requests from your local port 8000 to the Envoy service running inside the cluster:

kubectl port-forward deployment/envoy 8000:8000 2>&1 >/dev/null &
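
To verify the wiring, send a request to the API before any AuthConfig exists. Authorino should reject it, typically with a 404 Not Found, because no AuthConfig is associated with the host yet:

curl http://talker-api.127.0.0.1.nip.io:8000/hello -i
# HTTP/1.1 404 Not Found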

❺ Create an AuthConfig

Create an Authorino AuthConfig custom resource declaring the auth rules to be enforced:

This example of resource-level authorization leverages part of Keycloak's User-Managed Access (UMA) support. Authorino will fetch resource attributes stored in a Keycloak resource server client.

The Keycloak server also provides the identities. The sub claim of the Keycloak-issued ID tokens must match the owner of the requested resource, identified by the URI of the request.
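
Behind the scenes, Authorino uses the client credentials stored in the Secret created next to obtain a Protection API Token (PAT) from Keycloak and then looks up, in the UMA resource registry, the resources whose URI matches the path of the request. If you want to inspect that data yourself, the sketch below runs roughly the same queries against Keycloak's standard UMA protection API, using the same kubectl run pattern as the token requests in step ❻ (endpoint paths are Keycloak defaults and may differ in other UMA implementations):

# Obtain a Protection API Token (PAT) for the talker-api client
PAT=$(kubectl run pat --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=client_credentials' -d 'client_id=talker-api' -d 'client_secret=523b92b6-625d-4e1e-a313-77e7a8ae4e88' | jq -r .access_token)

# Find the ID of the resource registered for the URI /greetings/1
RESOURCE_ID=$(kubectl run uma-lookup --attach --rm --restart=Never -q --image=curlimages/curl -- -H "Authorization: Bearer $PAT" -s "http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant/authz/protection/resource_set?uri=/greetings/1" | jq -r '.[0]')

# Fetch the resource attributes, including the owner's id that the policy compares to the token's sub claim
kubectl run uma-resource --attach --rm --restart=Never -q --image=curlimages/curl -- -H "Authorization: Bearer $PAT" -s "http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant/authz/protection/resource_set/$RESOURCE_ID" | jq .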

Kuadrant users – Remember to create an AuthPolicy instead of an AuthConfig. For more, see Kuadrant auth.

Create the required secret that Authorino will use to authenticate with the UMA registry:

kubectl apply -f -<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: talker-api-uma-credentials
stringData:
  clientID: talker-api
  clientSecret: 523b92b6-625d-4e1e-a313-77e7a8ae4e88
type: Opaque
EOF

Create the config:

kubectl apply -f -<<EOF
apiVersion: authorino.kuadrant.io/v1beta3
kind: AuthConfig
metadata:
  name: talker-api-protection
spec:
  hosts:
  - talker-api.127.0.0.1.nip.io
  authentication:
    "keycloak-kuadrant-realm":
      jwt:
        issuerUrl: http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant
  metadata:
    "resource-data":
      uma:
        endpoint: http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant
        credentialsRef:
          name: talker-api-uma-credentials
  authorization:
    "owned-resources":
      opa:
        rego: |
          COLLECTIONS = ["greetings"]

          http_request = input.context.request.http
          http_method = http_request.method
          requested_path_sections = split(trim_left(trim_right(http_request.path, "/"), "/"), "/")

          get { http_method == "GET" }
          post { http_method == "POST" }
          put { http_method == "PUT" }
          delete { http_method == "DELETE" }

          valid_collection { COLLECTIONS[_] == requested_path_sections[0] }

          collection_endpoint {
            valid_collection
            count(requested_path_sections) == 1
          }

          resource_endpoint {
            valid_collection
            some resource_id
            requested_path_sections[1] = resource_id
          }

          identity_owns_the_resource {
            identity := input.auth.identity
            resource_attrs := object.get(input.auth.metadata, "resource-data", [])[0]
            resource_owner := object.get(object.get(resource_attrs, "owner", {}), "id", "")
            resource_owner == identity.sub
          }

          allow { get;    collection_endpoint }
          allow { post;   collection_endpoint }
          allow { get;    resource_endpoint; identity_owns_the_resource }
          allow { put;    resource_endpoint; identity_owns_the_resource }
          allow { delete; resource_endpoint; identity_owns_the_resource }
EOF

The OPA policy owned-resources above enforces that all authenticated users can send GET and POST requests to /greetings, while only resource owners can send GET, PUT and DELETE requests to /greetings/{resource-id}.

❻ Obtain access tokens with the Keycloak server and consume the API

Obtain an access token as John and consume the API

Obtain an access token for user John (owner of the resource /greetings/1 in the UMA registry):

The AuthConfig deployed in the previous step is suitable for validating access tokens requested inside the cluster. This is because the iss claim Keycloak adds to the JWTs always matches the host used to request the token, and Authorino will later try to match this host to the host that provides the OpenID Connect configuration.
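
You can confirm the issuer advertised by Keycloak from inside the cluster by querying the OpenID Connect discovery endpoint, using the same kubectl run pattern as the token request below:

kubectl run oidc-check --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant/.well-known/openid-configuration -s | jq -r .issuer
# http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant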

Obtain an access token from within the cluster:

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=john' -d 'password=p' -d 'scope=openid' | jq -r .access_token)

If your Keycloak server is reachable from outside the cluster, feel free to obtain the token directly. Make sure the host name set in the OIDC issuer endpoint in the AuthConfig matches the one used to obtain the token and is also reachable from within the cluster.

As John, send requests to the API:

curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api.127.0.0.1.nip.io:8000/greetings
# HTTP/1.1 200 OK

curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api.127.0.0.1.nip.io:8000/greetings/1
# HTTP/1.1 200 OK

curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api.127.0.0.1.nip.io:8000/greetings/1
# HTTP/1.1 200 OK

curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api.127.0.0.1.nip.io:8000/greetings/2 -i
# HTTP/1.1 403 Forbidden

Obtain an access token as Jane and consume the API

Obtain an access token for user Jane (owner of the resource /greetings/2 in the UMA registry):

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=jane' -d 'password=p' -d 'scope=openid' | jq -r .access_token)

As Jane, send requests to the API:

curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api.127.0.0.1.nip.io:8000/greetings
# HTTP/1.1 200 OK

curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api.127.0.0.1.nip.io:8000/greetings/1 -i
# HTTP/1.1 403 Forbidden

curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api.127.0.0.1.nip.io:8000/greetings/1 -i
# HTTP/1.1 403 Forbidden

curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api.127.0.0.1.nip.io:8000/greetings/2
# HTTP/1.1 200 OK

Obtain an access token as Peter and consume the API

Obtain an access token for user Peter (does not own any resource in the UMA registry):

ACCESS_TOKEN=$(kubectl run token --attach --rm --restart=Never -q --image=curlimages/curl -- http://keycloak.keycloak.svc.cluster.local:8080/realms/kuadrant/protocol/openid-connect/token -s -d 'grant_type=password' -d 'client_id=demo' -d 'username=peter' -d 'password=p' -d 'scope=openid' | jq -r .access_token)

As Peter, send requests to the API:

curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api.127.0.0.1.nip.io:8000/greetings
# HTTP/1.1 200 OK

curl -H "Authorization: Bearer $ACCESS_TOKEN" http://talker-api.127.0.0.1.nip.io:8000/greetings/1 -i
# HTTP/1.1 403 Forbidden

curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api.127.0.0.1.nip.io:8000/greetings/1 -i
# HTTP/1.1 403 Forbidden

curl -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE http://talker-api.127.0.0.1.nip.io:8000/greetings/2 -i
# HTTP/1.1 403 Forbidden

Cleanup

If you have started a Kubernetes cluster locally with Kind to try this user guide, delete it by running:

kind delete cluster --name authorino-tutorial

Otherwise, delete the resources created in each step:

kubectl delete authconfig/talker-api-protection
kubectl delete secret/talker-api-uma-credentials
kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/envoy/envoy-notls-deploy.yaml
kubectl delete -f https://raw.githubusercontent.com/kuadrant/authorino-examples/main/talker-api/talker-api-deploy.yaml
kubectl delete authorino/authorino
kubectl delete namespace keycloak

To uninstall the Authorino Operator and manifests (CRDs, RBAC, etc.), run:

kubectl delete -f https://raw.githubusercontent.com/Kuadrant/authorino-operator/main/config/deploy/manifests.yaml

Footnotes

  1. In contrast to a dedicated sidecar of the protected service and other architectures. Check out Architecture > Topologies for all options.

  2. namespaced reconciliation mode. See Cluster-wide vs. Namespaced instances.

  3. For other variants and deployment options, check out Getting Started, as well as the Authorino CRD specification.

  4. For details and instructions to set up Envoy manually, see Protect a service > Setup Envoy in the Getting Started page. If you are running your ingress gateway in Kubernetes and want to avoid setting up and configuring your proxy manually, check out Kuadrant.