Watches a remote cluster's nodes and services for changes and updates the local cluster's services and endpoints accordingly.
Developed and used to keep services reachable via in-cluster URLs across multiple clusters. This is "hacked" together by creating dummy services in cluster A that point to the node IPs and ports of cluster B.
Currently, authentication against the remote cluster is limited to Google Kubernetes Engine clusters.
- remote(-cluster) is always the cluster whose nodes and services are being watched
- local(-cluster) is always the cluster in which services and endpoints are managed
Barrelman consists of two different controller routines watching for different events in the local and remote cluster.
The NodeEndpointController only handles services with the label tfw.io/barrelman set to "true" or "managed-resource" (see ServiceController for the latter).
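For example, a dummy service that this controller would pick up can be created like this (a minimal sketch; name, namespace and ports are illustrative, and 30080 is assumed to be the NodePort of some service in the remote cluster):

```sh
kubectl create namespace foo
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: baz
  namespace: foo
  labels:
    tfw.io/barrelman: "true"
spec:
  # No selector: the endpoints are managed by barrelman, not by Kubernetes
  ports:
    - port: 80
      targetPort: 30080 # NodePort of the target service in the remote cluster
EOF
```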
Watch for changes of service objects in local-cluster:
- Add/Modify: Add or update a matching endpoint object
  - Internal IPs taken from the up-to-date list of nodes in remote-cluster
  - Port taken from the targetPort of the service
- Delete: Do nothing (Kubernetes will clean up the endpoint automatically)
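Once barrelman has reconciled the dummy service above, the managed endpoints can be inspected as sketched here (output illustrative; the IPs stand in for the internal IPs of the remote cluster's nodes):

```sh
kubectl -n foo get endpoints baz
# NAME   ENDPOINTS                            AGE
# baz    10.132.0.2:30080,10.132.0.3:30080   1m
```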
Watch for changes of nodes in remote-cluster:
- Add: Queue all service objects in local-cluster for endpoint updates
- Modify: Queue all service objects in local-cluster for endpoint updates
- Delete: Queue all service objects in local-cluster for endpoint updates
ServiceController operates on services in remote-cluster if they are not within an ignored namespace (--ignore-namespace; kube-system is ignored by default) and not ignored via the annotation tfw.io/barrelman: ignore.
Services in local-cluster are only updated/deleted if they carry the label tfw.io/barrelman: managed-resource. Namespaces created by barrelman are never removed.
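For instance, a single remote service can be excluded from syncing via the annotation, and whole namespaces via the flag (service and namespace names are illustrative):

```sh
# In remote-cluster: exclude one service from syncing
kubectl annotate service my-service tfw.io/barrelman=ignore

# Ignore an entire namespace (kube-system is ignored by default)
barrelman --ignore-namespace monitoring ...
```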
Watch for changes of service objects in local-cluster:
- Add/Modify: Do nothing
- Delete: Check if there is a corresponding service in remote-cluster and add a dummy as needed
Watch for changes of service objects in remote-cluster:
- Add: Create a dummy service in local-cluster (to be picked up by NodeEndpointController)
  - Create the namespace in local-cluster (if needed)
  - Create the service in local-cluster if it does not exist; update it (as in Modify) if it does
- Modify: Update the corresponding service object in local-cluster
  - Sync all service ports of the remote service
- Delete: Remove dummy service in local-cluster if it was created by barrelman
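As an illustration of the Add case: for a remote NodePort service "foo/bar" exposed on node port 31000, the dummy service barrelman creates in local-cluster is roughly equivalent to applying the following (a sketch, all values illustrative):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: bar
  namespace: foo
  labels:
    tfw.io/barrelman: managed-resource
spec:
  type: ClusterIP
  ports:
    - port: 80          # service port copied from the remote service
      targetPort: 31000 # NodePort of "foo/bar" in remote-cluster
EOF
```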
Services in local-cluster will be created with type ClusterIP by default. If you want them to be of type NodePort instead, run barrelman with the -nodeportsvc switch (the services will keep the same NodePort as in remote-cluster).
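A sketch of the flag in use (all other flags omitted, see the invocation example below):

```sh
# Synced services become type NodePort and keep the remote cluster's NodePort
barrelman -nodeportsvc ...
```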
Imagine there are clusters X and Y (nodes Xn and Yn) with barrelman running as Xb and Yb.
- Create Service "foo/baz" (barrelman label, targetPort == NodePort of some service in X) in Y
- Endpoint(s) "foo/baz" are created in Y (pointing to Xn)
- Change targetPort of "foo/baz" in Y
- Endpoint(s) "foo/baz" are in Y are updated accordingly
- Create Service "foo/bar" (type NodePort) in X
- Namespace "foo" is created in Y
- Service "foo/bar" (Type: ClusterIP, targetPort == NodePort of "foo/bar" in X) is created in Y
- Endpoint(s) "foo/bar" are created in Y (pointing to Xn:nodePort)
- Change NodePort of service "foo/bar" in X
- targetPort of service "foo/bar" in Y is updated accordingly
- Endpoint(s) "foo/bar" in Y are updated accordingly
- Delete Service "foo/bar" in X
- Service "foo/bar" in Y is deleted
- Endpoint(s) "foo/bar" in Y are deleted
The local cluster may be specified via -local-kubeconfig and -local-context. If omitted, in-cluster credentials will be used (where possible).
The remote cluster must be defined via -remote-project, -remote-zone and -remote-cluster-name. Cluster credentials and config (API host etc.) will then be auto-generated via the Google APIs using the service account provided via the environment variable GOOGLE_APPLICATION_CREDENTIALS.
barrelman -v 3 \
  -local-kubeconfig ~/.kube/config \
  -local-context "gke_gcp-project_region-and-zone_local-cluster-name" \
  -remote-project gcp-project \
  -remote-zone region-and-zone \
  -remote-cluster-name remote-cluster-name \
  -resync-period 1m
See rbac.yaml
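rbac.yaml in the repository is authoritative; as a rough, unverified sketch derived from the behavior described above, the role bound to barrelman in local-cluster needs permissions along these lines:

```sh
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: barrelman
rules:
  # Dummy services and their endpoints are created, updated and deleted
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  # Namespaces are created if needed, but never removed
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
EOF
```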
Needs a service account with the "Kubernetes Engine Viewer" IAM role (to read node and service details). To create one, use:
PROJECT="gcp-project"
gcloud --project="$PROJECT" iam service-accounts create barrelman --display-name barrelman
# Grant the Kubernetes Engine Viewer role
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member "serviceAccount:barrelman@${PROJECT}.iam.gserviceaccount.com" --role "roles/container.viewer"
# Create a service account key (to be used in CI/CD)
gcloud iam service-accounts keys create service-account.json \
  --iam-account="barrelman@${PROJECT}.iam.gserviceaccount.com"
# Base64-encode the service account key, store the output in the GitLab CI variable REMOTE_SERVICE_ACCOUNT
base64 -w0 < service-account.json
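To lint staged Go files before each commit, the following script can be installed as a git pre-commit hook (.git/hooks/pre-commit):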
#!/bin/bash
STAGED_GO_FILES=$(git diff --cached --name-only | grep '\.go$')
if [[ "$STAGED_GO_FILES" = "" ]]; then
  exit 0
fi
exec golangci-lint run --fix