Warning

Even though the latest kubespray repo deploys this by default, this guide has not yet been updated to reflect that. We deployed with the approach below. If your cluster already has the dashboard installed, feel free to skip this section.
kubectl apply -f kubernetes-dashboard/
from this amazing guide!
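To confirm the dashboard pods came up, something like the following should work (a sketch; the namespace and label can differ between dashboard versions):

# List the dashboard pods; older releases deploy into kube-system
$ kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard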
If you want to direct logs to an elasticsearch cluster, there is a simple option in kubespray to enable this (although we are not using it).
You’re supposed to edit the file my_inventory/group_vars/k8s-cluster.yml, find the efk_enabled attribute, and make sure it is set to true.
# Monitoring apps for k8s
efk_enabled: true
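The setting only takes effect once the kubespray playbook is run again against the cluster. A minimal sketch, assuming an inventory file next to the group_vars directory edited above:

# Re-run the cluster playbook so the EFK addons get deployed
# (the hosts.ini path is an assumption; use your own inventory file)
$ ansible-playbook -i my_inventory/hosts.ini -b cluster.yml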
However, we found a few things we didn’t like with this approach:
1. The elasticsearch and kibana versions used are still too old (2.4.x).
1. They’re deployed in the kube-system namespace. We’d like them in their own namespace.
1. kibana is deployed with a KIBANA_BASE_URL attribute, so it can be served using kubectl proxy, but we don’t want it limited that way.
Even so, here is how we got this to work:
After the playbook has finished, you need to manually change a couple of things.
- Edit the Kibana deployment and remove the KIBANA_BASE_URL attribute.
- Edit the Kibana service, changing the type from ClusterIP to NodePort, so you can access it from outside the cluster. Also pin the nodePort so the service is reachable on a known port:
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kibana-logging
  ports:
  - port: 5601
    nodePort: 30601
- You should now be able to access Kibana at k8s_node_ip:30601 (see the commands sketched below).
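As a sketch of how to make these edits and verify the result (the deployment name kibana-logging is an assumption; check with kubectl get deploy -n kube-system):

# Remove the KIBANA_BASE_URL env attribute from the deployment
$ kubectl -n kube-system edit deployment kibana-logging

# Change the service type to NodePort as in the manifest above
$ kubectl -n kube-system edit service kibana

# Kibana should now answer on the fixed node port
$ curl http://k8s_node_ip:30601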
In search of a better solution, we came across this very interesting guide; however, we were not able to deploy the logging solution due to an issue with the persistent volume.
In the end, this only worked after we had set up dynamic provisioning through EBS volumes.
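For reference, here is a minimal sketch of what such a StorageClass can look like with the in-tree AWS provisioner (the name gp2 and the default-class annotation are our choices, not something the guide prescribes):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    # PVCs without an explicit storageClassName will bind to this class
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2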
You need a few things:
- Set up a K8s secret for authentication to the registry
- Define the private registry docker image on the pod
- (Optional) HTTP access
Normally, you would use docker login, but in the Kubernetes world the solution is to use the imagePullSecrets attribute on the Pod. The way this works is that when the kubelet tries to download the image, it will authenticate with the docker registry using a Kubernetes Secret that you must have previously created.
$ kubectl create secret \
docker-registry \
myregistrykey \
--docker-server=DOCKER_REGISTRY_SERVER \
--docker-username=DOCKER_USER \
--docker-password=DOCKER_PASSWORD \
--docker-email=DOCKER_EMAIL
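You can confirm the secret was created correctly; the registry credentials are stored base64-encoded inside it:

$ kubectl get secret myregistrykey --output=yaml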
Great, now you can proceed with defining the Pod that will consume this secret when pulling down the image from the private registry. Please note the imagePullSecrets attribute below.
spec:
  containers:
  - args:
    - -r
    image: docker.mcore.cloud:5005/mcore/logstash:5.5.2
    name: logstash
    resources: {}
    readinessProbe:
      httpGet:
        path: /
        port: 9600
      initialDelaySeconds: 120
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
    volumeMounts:
    - mountPath: /usr/share/logstash/config
      name: logstash-config-volume
    - mountPath: /usr/share/logstash/pipeline/jdbc.conf
      name: logstash-pipeline-volume
      subPath: jdbc.conf
    - mountPath: /usr/share/logstash/pipeline/sql
      name: logstash-sql-volume
    - mountPath: /usr/share/logstash/pipeline/lastrun
      name: logstash-pvc
  imagePullSecrets:
  - name: cytech-gitlab-key
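If the pull fails, the pod events will show the registry authentication error. A quick way to check (the pod name logstash is an assumption; use the name from kubectl get pods):

# Look for "Pulling image ..." / "Failed to pull image ..." in the Events section
$ kubectl describe pod logstash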