- Introduction
- Prerequisites
- Installation and setup
- Next steps
- Upgrading
- Teardown and cleanup
- Further reading
## Introduction

This document walks you through setting up a Google Kubernetes Engine (GKE) cluster and installing the Bitnami Kubernetes Production Runtime (BKPR) on the cluster.
## Prerequisites

- Google Cloud account
- Google Cloud Platform (GCP) project
  - Kubernetes Engine API should be enabled
  - Google Cloud DNS API should be enabled
- Google Cloud SDK
- Kubernetes CLI
- BKPR installer
- kubecfg
- jq
In addition to the requirements listed above, a domain name is required for setting up Ingress endpoints to services running in the cluster. The specified domain name can be a top-level domain (TLD) or a subdomain. In either case, you have to manually set up the NS records for the specified TLD or subdomain so that DNS resolution queries are delegated to a Google Cloud DNS zone created and managed by BKPR.
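Delegation is normally configured in your registrar's web console, but if the parent domain also happens to be hosted in Cloud DNS you can script it instead. A minimal sketch, assuming a hypothetical parent zone named `parent-zone` and placeholder name servers; use the actual name servers reported in the Configure domain registration records step further below:

```bash
# Placeholder values: replace "parent-zone" with the Cloud DNS zone of the parent
# domain and the NS rrdatas with the name servers assigned to the BKPR-managed zone.
gcloud dns record-sets transaction start --zone=parent-zone
gcloud dns record-sets transaction add --zone=parent-zone \
  --name="${BKPR_DNS_ZONE}." --type=NS --ttl=300 \
  "ns-cloud-a1.googledomains.com." "ns-cloud-a2.googledomains.com." \
  "ns-cloud-a3.googledomains.com." "ns-cloud-a4.googledomains.com."
gcloud dns record-sets transaction execute --zone=parent-zone
```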
BKPR on GKE uses domain-based authorization, specified in `GCLOUD_AUTHZ_DOMAIN`, to verify users when they log in to the Elasticsearch, Kibana and Grafana dashboards. As such, you need a G Suite account set up for your authorization domain so that users can log in with their G Suite user accounts.
## Installation and setup

### Set up the cluster

In this section, you will deploy a Google Kubernetes Engine (GKE) cluster using the `gcloud` CLI.
- Authenticate the `gcloud` CLI with your Google Cloud account:
gcloud auth login
- Set the Google Cloud application default credentials:
gcloud auth application-default login
- Configure the following environment variables:
export BKPR_DNS_ZONE="my-domain.com"
export GCLOUD_USER="$(gcloud info --format='value(config.account)')"
export GCLOUD_PROJECT="my-gce-project"
export GCLOUD_ZONE="us-east1-d"
export GCLOUD_AUTHZ_DOMAIN="my-domain.com"
export GCLOUD_K8S_CLUSTER="my-gke-cluster"
export GCLOUD_K8S_VERSION="1.11"
  - `BKPR_DNS_ZONE` specifies the DNS suffix for the externally-visible websites and services deployed in the cluster. A TLD or a subdomain may be used.
  - `GCLOUD_USER` specifies the email address used to authenticate to Google Cloud Platform.
  - `GCLOUD_PROJECT` specifies the Google Cloud project id. `gcloud projects list` lists your Google Cloud projects.
  - `GCLOUD_ZONE` specifies the Google Cloud zone. `gcloud compute zones list` lists the Google Cloud zones.
  - `GCLOUD_AUTHZ_DOMAIN` specifies the email domain of authorized users and needs to be a G Suite domain.
  - `GCLOUD_K8S_CLUSTER` specifies the name of the GKE cluster.
  - `GCLOUD_K8S_VERSION` specifies the version of Kubernetes to use for creating the cluster. The BKPR Kubernetes version support matrix lists the base Kubernetes versions supported by BKPR. `gcloud container get-server-config --project ${GCLOUD_PROJECT} --zone ${GCLOUD_ZONE}` lists the versions available in your zone.
- Create an OAuth client ID by following these steps:
  - Go to https://console.developers.google.com/apis/credentials.
  - Select the project from the drop-down menu.
  - In the center pane, select the OAuth consent screen tab.
  - Enter an Application name.
  - Add the TLD of the domain specified in the `BKPR_DNS_ZONE` variable to the Authorized domains and Save the changes.
  - Choose the Credentials tab and select Create Credentials > OAuth client ID.
  - Select the Web application option and fill in a name.
  - Finally, add the following redirect URIs and hit Create.
Replace `${BKPR_DNS_ZONE}` with the value of the `BKPR_DNS_ZONE` environment variable.
Specify the displayed OAuth client ID and secret in the `GCLOUD_OAUTH_CLIENT_KEY` and `GCLOUD_OAUTH_CLIENT_SECRET` environment variables, for example:
export GCLOUD_OAUTH_CLIENT_KEY="xxxxxxx.apps.googleusercontent.com"
export GCLOUD_OAUTH_CLIENT_SECRET="xxxxxx"
- Set the default project:
gcloud config set project ${GCLOUD_PROJECT}
- Set the default compute zone:
gcloud config set compute/zone ${GCLOUD_ZONE}
- Create the GKE cluster:
gcloud container clusters create ${GCLOUD_K8S_CLUSTER} \
  --project ${GCLOUD_PROJECT} \
  --num-nodes 3 \
  --machine-type n1-standard-2 \
  --zone ${GCLOUD_ZONE} \
  --cluster-version ${GCLOUD_K8S_VERSION}
- Create a `cluster-admin` role binding:
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=${GCLOUD_USER}
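Before deploying BKPR, it is worth confirming that `kubectl` is pointed at the new cluster. A quick sanity check, assuming the cluster was created with the command above (re-fetching credentials is harmless even if `gcloud` already configured them):

```bash
# Fetch kubeconfig credentials for the new cluster and list its worker nodes.
gcloud container clusters get-credentials ${GCLOUD_K8S_CLUSTER} --zone ${GCLOUD_ZONE}
kubectl get nodes
```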
### Deploy BKPR

To bootstrap your Kubernetes cluster with BKPR:
kubeprod install gke \
--email "${GCLOUD_USER}" \
--dns-zone "${BKPR_DNS_ZONE}" \
--project "${GCLOUD_PROJECT}" \
--oauth-client-id "${GCLOUD_OAUTH_CLIENT_KEY}" \
--oauth-client-secret "${GCLOUD_OAUTH_CLIENT_SECRET}" \
--authz-domain "${GCLOUD_AUTHZ_DOMAIN}"
Wait for all the pods in the cluster to enter the `Running` state:
kubectl get pods -n kubeprod
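If you prefer a blocking check over polling the output above, a `kubectl wait` one-liner can be used instead; a minimal sketch (the 10-minute timeout is an arbitrary choice, not a value from the BKPR documentation):

```bash
# Block until every pod in the kubeprod namespace reports Ready, or the timeout expires.
kubectl wait --for=condition=Ready pods --all --namespace kubeprod --timeout=600s
```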
If you want to bootstrap the cluster from scratch after a failed run, you should remove the `kubeprod-manifest.jsonnet` file.
The `kubeprod-autogen.json` file stores sensitive information. Do not commit this file to a Git repository.
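If the working directory is itself a Git repository, one simple safeguard is to ignore the generated file explicitly; a minimal sketch:

```bash
# Keep the generated secrets file out of version control.
echo "kubeprod-autogen.json" >> .gitignore
```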
### Configure domain registration records

BKPR creates and manages a Cloud DNS zone which is used to map external access to applications and services in the cluster. However, for it to be usable, you need to configure the NS records for the zone.
Query the name servers of the zone with the following commands and configure the NS records at your domain registrar.
BKPR_DNS_ZONE_NAME=$(gcloud dns managed-zones list --filter dnsName:${BKPR_DNS_ZONE} --format='value(name)')
gcloud dns record-sets list \
--zone ${BKPR_DNS_ZONE_NAME} \
--name ${BKPR_DNS_ZONE} --type NS \
--format=json | jq -r .[].rrdatas
The following screenshot illustrates the NS record configuration on a DNS registrar when a subdomain is used.
Please note that it can take a while for the DNS changes to propagate.
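To check whether the delegation has propagated, you can query public DNS from your workstation. A minimal sketch, assuming the `dig` utility is installed (it is not one of the prerequisites listed above):

```bash
# Should eventually return the Cloud DNS name servers listed by the command above.
dig +short NS "${BKPR_DNS_ZONE}"
```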
### Access logging and monitoring dashboards

After the DNS changes have propagated, you should be able to access the Prometheus, Kibana and Grafana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}`, `https://kibana.${BKPR_DNS_ZONE}` and `https://grafana.${BKPR_DNS_ZONE}` respectively.

Replace `${BKPR_DNS_ZONE}` with the value of the `BKPR_DNS_ZONE` environment variable.
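A quick command-line check that the three endpoints are reachable; a sketch only, since the exact HTTP response depends on the OAuth login flow that fronts each dashboard:

```bash
# Print only the HTTP status code returned by each dashboard endpoint.
for app in prometheus kibana grafana; do
  curl -sS -o /dev/null -w "%{http_code} https://${app}.${BKPR_DNS_ZONE}\n" "https://${app}.${BKPR_DNS_ZONE}"
done
```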
## Next steps

Congratulations! You can now deploy your applications on the Kubernetes cluster and BKPR will help you manage and monitor them effortlessly.
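As a purely illustrative next step, the sketch below deploys a trivial web application and exposes it under the BKPR-managed domain. It assumes that the ingress, DNS and TLS automation installed by BKPR picks up a standard Ingress resource carrying the `kubernetes.io/tls-acme` annotation; check the BKPR documentation for your release before relying on this, and note that the `hello` names are placeholders:

```bash
# Hypothetical example: run a trivial web app and expose it through a Service.
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80

# Ingress with a host under ${BKPR_DNS_ZONE}; the kubernetes.io/tls-acme annotation
# requests a TLS certificate and the host name should be published in the Cloud DNS
# zone (assumption: BKPR's bundled certificate and DNS automation handle this).
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  rules:
    - host: hello.${BKPR_DNS_ZONE}
      http:
        paths:
          - path: /
            backend:
              serviceName: hello
              servicePort: 80
  tls:
    - hosts:
        - hello.${BKPR_DNS_ZONE}
      secretName: hello-tls
EOF
```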
## Upgrading

Follow the installation guide to update the BKPR installer binary to the latest release.
Edit the `kubeprod-manifest.jsonnet` file that was generated by `kubeprod install` and update the version referenced in the `import` statement. For example, the following snippet illustrates the changes required in the `kubeprod-manifest.jsonnet` file if you're upgrading from version `v1.0.0` to version `v1.1.0`:
// Cluster-specific configuration
-(import "https://releases.kubeprod.io/files/v1.0.0/manifests/platforms/gke.jsonnet") {
+(import "https://releases.kubeprod.io/files/v1.1.0/manifests/platforms/gke.jsonnet") {
config:: import "kubeprod-autogen.json",
// Place your overrides here
}
Re-run the `kubeprod install` command from the Deploy BKPR step, in the directory containing the existing `kubeprod-autogen.json` and the updated `kubeprod-manifest.jsonnet` files.
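For reference, this is the same command shown in the Deploy BKPR step, repeated here with the original flags (adjust the environment variables if they have changed since the initial install):

```bash
kubeprod install gke \
  --email "${GCLOUD_USER}" \
  --dns-zone "${BKPR_DNS_ZONE}" \
  --project "${GCLOUD_PROJECT}" \
  --oauth-client-id "${GCLOUD_OAUTH_CLIENT_KEY}" \
  --oauth-client-secret "${GCLOUD_OAUTH_CLIENT_SECRET}" \
  --authz-domain "${GCLOUD_AUTHZ_DOMAIN}"
```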
## Teardown and cleanup

Uninstall BKPR from the cluster:
kubecfg delete kubeprod-manifest.jsonnet

Wait for the `kubeprod` namespace to be deleted:
kubectl wait --for=delete ns/kubeprod --timeout=300s

Delete the Cloud DNS zone created by BKPR:
BKPR_DNS_ZONE_NAME=$(gcloud dns managed-zones list --filter dnsName:${BKPR_DNS_ZONE} --format='value(name)')
gcloud dns record-sets import /dev/null --zone ${BKPR_DNS_ZONE_NAME} --delete-all-existing
gcloud dns managed-zones delete ${BKPR_DNS_ZONE_NAME}

Remove the IAM policy binding and delete the BKPR service account:
GCLOUD_SERVICE_ACCOUNT=$(gcloud iam service-accounts list --filter "displayName:${BKPR_DNS_ZONE} AND email:bkpr-edns" --format='value(email)')
gcloud projects remove-iam-policy-binding ${GCLOUD_PROJECT} \
  --member=serviceAccount:${GCLOUD_SERVICE_ACCOUNT} \
  --role=roles/dns.admin
gcloud iam service-accounts delete ${GCLOUD_SERVICE_ACCOUNT}

Finally, delete the GKE cluster:
gcloud container clusters delete ${GCLOUD_K8S_CLUSTER}
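To double-check that nothing was left behind, the same filters used above can be re-run and should return empty results; a minimal sketch:

```bash
# Each of these should print no results once teardown is complete.
gcloud dns managed-zones list --filter dnsName:${BKPR_DNS_ZONE}
gcloud iam service-accounts list --filter "displayName:${BKPR_DNS_ZONE} AND email:bkpr-edns"
gcloud container clusters list --filter name:${GCLOUD_K8S_CLUSTER}
```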