This document will walk you through running Gardener on your local machine for development purposes. If you encounter difficulties, please open an issue so that we can make this process easier.
Gardener runs in any Kubernetes cluster. In this guide, we will start a KinD cluster which is used as both garden and seed cluster (please refer to the architecture overview) for simplicity.
The Gardener components, however, will be run as regular processes on your machine (hence, no container images are being built).
When developing Gardener on your local machine you might face several limitations:
- Your machine doesn't have enough compute resources (see prerequisites) for hosting a second seed cluster or multiple shoot clusters.
- Developing Gardener's IPv6 features requires a Linux machine and native IPv6 connectivity to the internet, but you're on macOS or don't have IPv6 connectivity in your office environment or via your home ISP.
In these cases, you might want to check out one of the following options, which run the setup described in this guide elsewhere to circumvent these limitations:
- remote local setup: develop on a remote pod for more compute resources
- dev box on Google Cloud: develop on a Google Cloud machine for more compute resources and/or simple IPv4/IPv6 dual-stack networking
- Make sure that you have followed the Local Setup guide up until the Get the sources step.
- Make sure your Docker daemon is up-to-date, up and running, and has enough resources (at least `4` CPUs and `4Gi` memory; see here how to configure the resources for Docker for Mac).

  > Please note that 4 CPU / 4Gi memory might not be enough for more than one `Shoot` cluster, i.e., you might need to increase these values if you want to run additional `Shoot`s. If you plan on following the optional steps to create a second seed cluster, the required resources will be more - at least `10` CPUs and `16Gi` memory.

  Additionally, please configure at least `120Gi` of disk size for the Docker daemon.

  > Tip: With `docker system df` and `docker system prune -a` you can clean up unused data.
- Make sure the `kind` docker network is using the CIDR `172.18.0.0/16`.
  - If the network does not exist, it can be created with `docker network create kind --subnet 172.18.0.0/16`.
  - If the network already exists, the CIDR can be checked with `docker network inspect kind | jq '.[].IPAM.Config[].Subnet'`. If it is not `172.18.0.0/16`, delete the network with `docker network rm kind` and create it with the command above.
- Make sure that you increase the maximum number of open files on your host:
  - On Mac, run `sudo launchctl limit maxfiles 65536 200000`.
  - On Linux, extend the `/etc/security/limits.conf` file with

    ```
    * hard nofile 97816
    * soft nofile 97816
    ```

    and reload the terminal.
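The `kind` network prerequisite above can be verified with a small script. This is an illustrative sketch, not part of the Gardener tooling: it reuses the `docker network inspect`/`jq` query from the list above and only prints guidance instead of modifying anything.

```shell
#!/bin/sh
# Illustrative sketch: verify the "kind" docker network CIDR prerequisite.
EXPECTED_SUBNET="172.18.0.0/16"

# Pure helper (usable without Docker): prints "ok" if the subnet matches.
check_subnet() {
  if [ "$1" = "$EXPECTED_SUBNET" ]; then echo "ok"; else echo "mismatch"; fi
}

if command -v docker >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
  actual=$(docker network inspect kind 2>/dev/null | jq -r '.[].IPAM.Config[].Subnet' 2>/dev/null | head -n 1)
  if [ -z "$actual" ]; then
    echo "network missing; create it with: docker network create kind --subnet $EXPECTED_SUBNET"
  elif [ "$(check_subnet "$actual")" = "mismatch" ]; then
    echo "CIDR is $actual; recreate with: docker network rm kind && docker network create kind --subnet $EXPECTED_SUBNET"
  else
    echo "kind network ok"
  fi
fi
```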
```bash
make kind-up KIND_ENV=local
```

If you want to set up an IPv6 KinD cluster, use `make kind-up IPFAMILY=ipv6` instead.
This command sets up a new KinD cluster named `gardener-local` and stores the kubeconfig in the `./example/gardener-local/kind/local/kubeconfig` file.

It might be helpful to copy this file to `$HOME/.kube/config`, since you will need to target this KinD cluster multiple times. Alternatively, make sure to set your `KUBECONFIG` environment variable to `./example/gardener-local/kind/local/kubeconfig` for all future steps via `export KUBECONFIG=example/gardener-local/kind/local/kubeconfig`.
All following steps assume that you are using this kubeconfig.
Additionally, this command deploys a local container registry to the cluster, as well as a few registry mirrors that are set up as a pull-through cache for all upstream registries Gardener uses by default.
This is done to speed up image pulls across local clusters.
The local registry can be accessed as `localhost:5001` for pushing and pulling.
The storage directories of the registries are mounted to the host machine under `dev/local-registry`.
With this, mirrored images don't have to be pulled again after recreating the cluster.
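As a sketch of how the local registry can be used, the following retags an image for `localhost:5001` and pushes it. The image name `my-app:dev` is a hypothetical example, and the push is only attempted when a Docker daemon is actually reachable:

```shell
# Sketch: retag an example image for the local registry and push it.
REGISTRY="localhost:5001"
IMAGE="my-app:dev"   # hypothetical image name, for illustration only

# Compute the registry-qualified tag (pure string manipulation).
local_tag="${REGISTRY}/${IMAGE}"
echo "$local_tag"

# Only touch Docker if the daemon is up; skip gracefully otherwise.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker tag "$IMAGE" "$local_tag" && docker push "$local_tag" \
    || echo "skipping push (image not available locally)"
fi
```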
The command also deploys a default Calico installation as the cluster's CNI implementation with `NetworkPolicy` support (the default `kindnet` CNI doesn't provide `NetworkPolicy` support).
Furthermore, it deploys the metrics-server in order to support HPA and VPA on the seed cluster.
If you want to test IPv6-related features, you need to configure NAT for outgoing traffic from the kind network to the internet.
After `make kind-up IPFAMILY=ipv6`, check the network created by kind:

```bash
$ docker network inspect kind | jq '.[].IPAM.Config[].Subnet'

"172.18.0.0/16"
"fc00:f853:ccd:e793::/64"
```

Determine which device is used for outgoing internet traffic by looking at the default route:

```bash
$ ip route show default
default via 192.168.195.1 dev enp3s0 proto dhcp src 192.168.195.34 metric 100
```
Configure NAT for traffic from the kind cluster to the internet using the IPv6 range and the network device from the previous two steps:

```bash
ip6tables -t nat -A POSTROUTING -o enp3s0 -s fc00:f853:ccd:e793::/64 -j MASQUERADE
```
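The two lookups above can also be combined into one step. The following sketch parses the device name out of a default-route line and assembles the `ip6tables` command; it prints the command rather than executing it, since applying the rule needs root privileges (the sample route line and subnet are the example values from above):

```shell
# Pure helper: extract the interface name following "dev" from a route line,
# e.g. "default via 192.168.195.1 dev enp3s0 proto dhcp ..." -> "enp3s0".
route_device() {
  echo "$1" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}'
}

sample_route="default via 192.168.195.1 dev enp3s0 proto dhcp src 192.168.195.34 metric 100"
dev=$(route_device "$sample_route")

# On a real host you would use the live values instead, e.g.:
#   dev=$(route_device "$(ip route show default)")
#   subnet=$(docker network inspect kind | jq -r '.[].IPAM.Config[].Subnet' | grep :)
subnet="fc00:f853:ccd:e793::/64"

# Print the command to run with root privileges.
echo "ip6tables -t nat -A POSTROUTING -o $dev -s $subnet -j MASQUERADE"
```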
In a terminal pane, run:

```bash
make dev-setup                                                            # preparing the environment (without webhooks for now)
kubectl wait --for=condition=ready pod -l run=etcd -n garden --timeout 2m # wait for etcd to be ready
make start-apiserver                                                      # starting gardener-apiserver
```

In a new terminal pane, run:

```bash
kubectl wait --for=condition=available apiservice v1beta1.core.gardener.cloud # wait for gardener-apiserver to be ready
make start-admission-controller                                               # starting gardener-admission-controller
```

In a new terminal pane, run:

```bash
make dev-setup DEV_SETUP_WITH_WEBHOOKS=true # preparing the environment with webhooks
make start-controller-manager               # starting gardener-controller-manager
```

(Optional): In a new terminal pane, run:

```bash
make start-scheduler # starting gardener-scheduler
```

In a new terminal pane, run:

```bash
make register-local-env              # registering the local environment (CloudProfile, Seed, etc.)
make start-gardenlet SEED_NAME=local # starting gardenlet
```

In a new terminal pane, run:

```bash
make start-extension-provider-local # starting gardener-extension-provider-local
```
ℹ️ The `provider-local` is started with elevated privileges since it needs to manipulate your `/etc/hosts` file to enable you to access the created shoot clusters from your local machine, see this for more details.
You can wait for the `Seed` to become ready by running:

```bash
./hack/usage/wait-for.sh seed local GardenletReady Bootstrapped SeedSystemComponentsHealthy ExtensionsReady
```

Alternatively, you can run `kubectl get seed local` and wait for the `STATUS` to indicate readiness:

```bash
NAME    STATUS   PROVIDER   REGION   AGE     VERSION       K8S VERSION
local   Ready    local      local    4m42s   vX.Y.Z-dev    v1.21.1
```
In order to create a first shoot cluster, just run:

```bash
kubectl apply -f example/provider-local/shoot.yaml
```

You can wait for the `Shoot` to be ready by running:

```bash
NAMESPACE=garden-local ./hack/usage/wait-for.sh shoot local APIServerAvailable ControlPlaneHealthy ObservabilityComponentsHealthy EveryNodeReady SystemComponentsHealthy
```

Alternatively, you can run `kubectl -n garden-local get shoot local` and wait for the `LAST OPERATION` to reach `100%`:

```bash
NAME    CLOUDPROFILE   PROVIDER   REGION   K8S VERSION   HIBERNATION   LAST OPERATION            STATUS    AGE
local   local          local      local    1.21.0        Awake         Create Processing (43%)   healthy   94s
```

(Optional): You could also execute a simple e2e test (creating and deleting a shoot) by running:

```bash
make test-e2e-local-simple KUBECONFIG="$PWD/example/gardener-local/kind/local/kubeconfig"
```
When the `Shoot` got created successfully, you can acquire a `kubeconfig` by using the `shoots/adminkubeconfig` subresource to access the cluster.
There are cases where you would want to create a second seed cluster in your local setup, for example, if you want to test the control plane migration feature. The following steps describe how to do that.

If you are on macOS, add a new IP address on your loopback device, which will be necessary for the new KinD cluster that you will create. On macOS, the default loopback device is `lo0`.

```bash
sudo ip addr add 127.0.0.2 dev lo0 # adding 127.0.0.2 ip to the loopback interface
```
Next, set up the second KinD cluster:

```bash
make kind2-up KIND_ENV=local
```

This command sets up a new KinD cluster named `gardener-local2` and stores its kubeconfig in the `./example/gardener-local/kind/local2/kubeconfig` file. You will need this file when starting the `provider-local` extension controller for the second seed cluster.
```bash
make register-kind2-env               # registering the local2 seed
make start-gardenlet SEED_NAME=local2 # starting gardenlet for the local2 seed
```
In a new terminal pane, run:

```bash
export KUBECONFIG=./example/gardener-local/kind/local2/kubeconfig # setting KUBECONFIG to point to second kind cluster
make start-extension-provider-local \
  WEBHOOK_SERVER_PORT=9444 \
  WEBHOOK_CERT_DIR=/tmp/gardener-extension-provider-local2 \
  SERVICE_HOST_IP=127.0.0.2 \
  METRICS_BIND_ADDRESS=:8082 \
  HEALTH_BIND_ADDRESS=:8083 # starting gardener-extension-provider-local
```
If you want to perform a control plane migration you can follow the steps outlined in the Control Plane Migration topic to migrate the shoot cluster to the second seed you just created.
```bash
./hack/usage/delete shoot local garden-local
make tear-down-kind2-env
make kind2-down
make tear-down-local-env
make kind-down
```
Just like Prow executes the KinD-based integration tests in a K8s pod, it is possible to interactively run this KinD-based Gardener development environment, aka "local setup", in a "remote" K8s pod.

```bash
k apply -f docs/development/content/remote-local-setup.yaml
k exec -it deployment/remote-local-setup -- sh
tmux -u a
```

Please refer to the TMUX documentation for working effectively inside the remote-local-setup pod.
To access Grafana, Prometheus, or other components in a browser, two port forwards are needed:

The port forward from the laptop to the pod:

```bash
k port-forward deployment/remote-local-setup 3000
```

The port forward in the remote-local-setup pod to the respective component:

```bash
k port-forward -n shoot--local--local deployment/grafana 3000
```
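The chained port forward can be sketched as a tiny helper that prints the two commands in the order they must be run (the Grafana deployment and `shoot--local--local` namespace are the example component from above; the second command has to be executed inside the remote-local-setup pod):

```shell
# Sketch: print the two port-forward commands needed to reach a component.
forward_chain() {
  port="$1"       # port opened on the laptop and inside the pod
  namespace="$2"  # namespace of the component in the KinD cluster
  target="$3"     # component, e.g. deployment/grafana
  echo "laptop: k port-forward deployment/remote-local-setup ${port}"
  echo "pod:    k port-forward -n ${namespace} ${target} ${port}"
}

forward_chain 3000 shoot--local--local deployment/grafana
```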