Merge pull request #147 from maryamtahhan/hotfix-gsg

gsg: update the kind starter guide

rootfs authored Apr 10, 2024
2 parents 1c028a7 + 5583e0e commit 0abae6f
Showing 2 changed files with 136 additions and 68 deletions.
186 changes: 123 additions & 63 deletions docs/installation/kepler.md

## Getting Started

The following instructions work for both `Kind` and `Kubeadm` clusters.

### Prerequisites

1. You have a Kubernetes cluster running.

> **NOTE**: If you want to set up a kind cluster, [follow this](./local-cluster.md#install-kind)
2. The monitoring stack, i.e. Prometheus with Grafana, is set up. [Steps here](#deploy-the-prometheus-operator)

> **Note**: The default Grafana deployment can be accessed with the credentials `admin:admin`. You
can expose the web-based UI locally using:

```console
# kubectl -n monitoring port-forward svc/grafana 3000
```

If the prerequisites are met, proceed to the following sections.

### Deploying Kepler on a local kind cluster

To deploy Kepler on `kind`, we need to build it locally with specific flags. The full details of local
builds are covered in the [section below](#build-manifests). To deploy on a local `kind` cluster,
you need to use the `CI_DEPLOY` and `PROMETHEUS_DEPLOY` flags.

```console
# git clone --depth 1 [email protected]:sustainable-computing-io/kepler.git
# cd ./kepler
# make build-manifest OPTS="CI_DEPLOY PROMETHEUS_DEPLOY"
# kubectl apply -f _output/generated-manifest/deployment.yaml
```
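
Once applied, you can check that the Kepler pods came up (a quick sanity check; the generated manifests deploy into the `kepler` namespace):

```console
# kubectl -n kepler get pods
```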

### Deploying Kepler on a baremetal Kubeadm cluster

To deploy Kepler on [Kubeadm][2], we need to build it locally with specific flags. The full details of local
builds are covered in the [section below](#build-manifests). To deploy on a baremetal `Kubeadm` cluster,
you need to use the `BM_DEPLOY` and `PROMETHEUS_DEPLOY` flags.

```console
# git clone --depth 1 [email protected]:sustainable-computing-io/kepler.git
# cd ./kepler
# make build-manifest OPTS="BM_DEPLOY PROMETHEUS_DEPLOY"
# kubectl apply -f _output/generated-manifest/deployment.yaml
```
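
To confirm the deployment, you can watch the exporter roll out; this sketch assumes the DaemonSet is named `kepler-exporter`, matching the service name used below:

```console
# kubectl -n kepler rollout status daemonset kepler-exporter
```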

### Dashboard access

The deployment steps above will create a Kepler service listening on port `9102`.

If you followed the Kepler dashboard deployment steps, you can access the Kepler
dashboard by navigating to [http://localhost:3000/](http://localhost:3000/). Log in
using `admin:admin`, and skip the window where Grafana asks for a new password.

![Grafana dashboard](../fig/grafana_dashboard.png)

> **Note**: To forward the ports, simply run:
```console
# kubectl port-forward --address localhost -n kepler service/kepler-exporter 9102:9102 &
# kubectl port-forward --address localhost -n monitoring service/prometheus-k8s 9090:9090 &
# kubectl port-forward --address localhost -n monitoring service/grafana 3000:3000 &
```
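
With the `kepler-exporter` port-forward in place, a quick way to confirm the exporter is serving data is to scrape its metrics endpoint directly; Kepler's metric names are prefixed with `kepler_`:

```console
# curl -s http://localhost:9102/metrics | grep "^kepler_" | head
```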

### Build manifests

First, fork the [kepler](https://github.com/sustainable-computing-io/kepler) repository and clone it.

If you want to use Redfish BMC and IPMI, you need to add the Redfish and IPMI credentials for each
kubelet node to `redfish.csv` under the `kepler/manifests/config/exporter` directory. The format of
the file is as follows:

```csv
kubelet_node_name_1,redfish_username_1,redfish_password_1,https://redfish_ip_or_hostname_1
kubelet_node_name_2,redfish_username_2,redfish_password_2,https://redfish_ip_or_hostname_2
```

Here, `kubelet_node_name` in the first column is the name of the node where the kubelet is running.
You can get the node names by running the following command:

```console
# kubectl get nodes
```

`redfish_username` and `redfish_password` in the second and third columns are the credentials used to access the Redfish API from each node,
while `https://redfish_ip_or_hostname` in the fourth column is the Redfish endpoint, given as an IP address or hostname.
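
For example, a two-node cluster might use entries like these (the node names, credentials, and endpoints below are purely illustrative):

```csv
worker-1,admin,SuperSecret1,https://10.0.0.101
worker-2,admin,SuperSecret2,https://bmc-worker-2.example.com
```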

Then, build the manifest files that suit your environment and deploy them with the following steps:

```console
# make build-manifest OPTS="<deployment options>"
```

Minimum deployment:

```console
# make build-manifest
```

Deployment with the estimator sidecar on OpenShift:

```console
# make build-manifest OPTS="ESTIMATOR_SIDECAR_DEPLOY OPENSHIFT_DEPLOY"
```

Manifests will be generated in `_output/generated-manifest/` by default.
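
Before applying, you can inspect what was generated; the exact set of files depends on the `OPTS` you chose:

```console
# ls _output/generated-manifest/
```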

## Deploy the Prometheus operator

If Prometheus is already installed in the cluster, skip this step. Otherwise, follow these steps
to install it.

1. Clone the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) project to
your local folder, and enter the `kube-prometheus` directory.

```console
# git clone --depth 1 https://github.com/prometheus-operator/kube-prometheus
# cd kube-prometheus
```

2. This step is optional. You can later manually add the [Kepler Grafana dashboard][1] through the
Grafana UI. To automatically do that, fetch the `kepler-exporter` Grafana dashboard and inject it into

the Prometheus Grafana deployment.

```console
# KEPLER_EXPORTER_GRAFANA_DASHBOARD_JSON=`curl -fsSL https://raw.githubusercontent.com/sustainable-computing-io/kepler/main/grafana-dashboards/Kepler-Exporter.json | sed '1 ! s/^/    /'`
# mkdir -p grafana-dashboards
# cat - > ./grafana-dashboards/kepler-exporter-configmap.yaml << EOF
apiVersion: v1
data:
  kepler-exporter.json: |-
    $KEPLER_EXPORTER_GRAFANA_DASHBOARD_JSON
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 9.5.3
  name: grafana-dashboard-kepler-exporter
  namespace: monitoring
EOF
```

> **Note:** The next step uses [yq](https://github.com/mikefarah/yq), a YAML processor.
```console
# yq -i e '.items += [load("./grafana-dashboards/kepler-exporter-configmap.yaml")]' ./manifests/grafana-dashboardDefinitions.yaml
# yq -i e '.spec.template.spec.containers.0.volumeMounts += [ {"mountPath": "/grafana-dashboard-definitions/0/kepler-exporter", "name": "grafana-dashboard-kepler-exporter", "readOnly": false} ]' ./manifests/grafana-deployment.yaml
# yq -i e '.spec.template.spec.volumes += [ {"configMap": {"name": "grafana-dashboard-kepler-exporter"}, "name": "grafana-dashboard-kepler-exporter"} ]' ./manifests/grafana-deployment.yaml
```
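
If you want to double-check the injection, you can read the edited fields back with the same `yq` tool; for example:

```console
# yq e '.spec.template.spec.volumes' ./manifests/grafana-deployment.yaml
```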

3. Finally, apply the objects in the `manifests` directory. This will create the `monitoring`
namespace and CRDs, and then wait for them to be available before creating the remaining
resources. During the `until` loop, a response of `No resources found` is to be expected.
This statement checks whether the resource API is created but doesn't expect the resources
to be there.

```console
# kubectl apply --server-side -f manifests/setup
# until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
# kubectl apply -f manifests/
```

> **Note:** In a Kind cluster it takes a short while for all the pods and services to
reach a `Running` state.
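
If you prefer to block until everything is ready, something like the following works (adjust the timeout to your environment):

```console
# kubectl -n monitoring wait --for=condition=Ready pods --all --timeout=300s
```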

[1]:https://raw.githubusercontent.com/sustainable-computing-io/kepler/main/grafana-dashboards/Kepler-Exporter.json
[2]:https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/
18 changes: 13 additions & 5 deletions docs/installation/local-cluster.md
# Local cluster setup

Kepler runs on Kubernetes. If you already have access to a cluster, you can skip this section. To deploy a local cluster,
you can use [kind](https://kind.sigs.k8s.io/). `kind` is a tool for running local Kubernetes clusters using Docker container
"nodes". It was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

## Install kind

```yaml
# … (beginning of the example local-cluster-config.yaml elided)
      containerPath: /usr/src
```
We can then spin up a cluster with either:
```console
# export CLUSTER_NAME="my-cluster" # we can use the --name flag to override the name in our config
# kind create cluster --name=$CLUSTER_NAME --config=./local-cluster-config.yaml
```

or simply by running:

```console
# make cluster-up
```

Note that `kind` automatically switches your current `kubeconfig` context to the newly created cluster.
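
To confirm that the context switch happened and the cluster is reachable, you can run the following (assuming the cluster name chosen above, which `kind` prefixes with `kind-`):

```console
# kubectl cluster-info --context kind-$CLUSTER_NAME
```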