diff --git a/.github/workflows/super-linter.yaml b/.github/workflows/super-linter.yaml
index 213d0799..f59b2a48 100644
--- a/.github/workflows/super-linter.yaml
+++ b/.github/workflows/super-linter.yaml
@@ -16,13 +16,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
- uses: actions/checkout@v3
+ uses: actions/checkout@v4
with:
# Full git history is needed to get a proper list of changed files within `super-linter`
fetch-depth: 0
- name: Lint Code Base
- uses: github/super-linter@v4
+ uses: super-linter/super-linter@v6.3.0
env:
VALIDATE_ALL_CODEBASE: false
DEFAULT_BRANCH: "main"
diff --git a/README.md b/README.md
index 3fbe4899..f50f7f4b 100644
--- a/README.md
+++ b/README.md
@@ -1,11 +1,13 @@
# Documentation for Kepler-Doc
-Follow https://sustainable-computing.io/ to see documentation
+Follow [sustainable-computing.io](https://sustainable-computing.io/) to see documentation
## Install MkDocs
+
**Requirements:**
+
- Python 3.8
-
+
```bash
pip install -r requirements.txt
```
@@ -13,20 +15,23 @@ pip install -r requirements.txt
## Rendering adopters
- uses gomplate 3.11.4, either install it or use tea.xyz:
-```
-sh <(curl https://tea.xyz) +gomplate.ca^v3.11.4 sh
-```
+
+ ```sh
+ sh <(curl https://tea.xyz) +gomplate.ca^v3.11.4 sh
+ ```
+
- template adopters via:
-```
-gomplate -d adopters=./data/adopters.yaml -f templates/adopters.md -o docs/project/adopters.md
-```
+
+ ```sh
+ gomplate -d adopters=./data/adopters.yaml -f templates/adopters.md -o docs/project/adopters.md
+ ```
## Commands
-* `mkdocs new [dir-name]` - Create a new project.
-* `mkdocs serve` - Start the live-reloading docs server.
-* `mkdocs build` - Build the documentation site.
-* `mkdocs -h` - Print help message and exit.
+- `mkdocs new [dir-name]` - Create a new project.
+- `mkdocs serve` - Start the live-reloading docs server.
+- `mkdocs build` - Build the documentation site.
+- `mkdocs -h` - Print help message and exit.
## Layout
@@ -43,11 +48,12 @@ GitHub codespaces [provides a generous free tier](https://github.com/features/co
1. Click "Create codespace on main"
1. A new tab will open and your environment will be built
1. Create `virtualenv` to install `mkdocs`
-```bash
-virtualenv .venv
-source .venv/bin/activate
-pip install -r requirements.txt
-```
+
+ ```bash
+ virtualenv .venv
+ source .venv/bin/activate
+ pip install -r requirements.txt
+ ```
1. Once built, type `mkdocs serve`
1. A box will appear informing you that the site is available on port `8000`. Click the link to view the site
diff --git a/docs/design/architecture.md b/docs/design/architecture.md
index ce2ca74c..ff889be2 100644
--- a/docs/design/architecture.md
+++ b/docs/design/architecture.md
@@ -1,19 +1,21 @@
# Components
## Kepler Exporter
+
Kepler Exporter exposes a variety of metrics about the energy consumption of Kubernetes components such as Pods and Nodes.
Monitor container power consumption with the [metrics](metrics.md) made available by the Kepler Exporter.
-![](https://raw.githubusercontent.com/sustainable-computing-io/kepler/main/doc/kepler-arch.png)
+![Kepler Architecture](https://raw.githubusercontent.com/sustainable-computing-io/kepler/main/doc/kepler-arch.png)
## Kepler Model Server
+
The main feature of `Kepler Model Server` is to return a [power estimation model](../kepler_model_server/power_estimation.md) corresponding to the request containing target granularity (node in total, node per each processor component, pod in total, pod per each processor component), available input metrics, model filters such as accuracy.
-In addition, the online-trainer can be deployed as a sidecar container to the server (main container) to execute trainning pipelines and update the model on the fly when power metrics are available.
+In addition, the online-trainer can be deployed as a sidecar container to the server (main container) to execute training pipelines and update the model on the fly when power metrics are available.
`Kepler Estimator` is a client module to kepler model server running as a sidecar of Kepler Exporter (main container).
-This python will serve a PowerReequest from model package in Kepler Exporter as defined in estimator.go via unix domain socket `/tmp/estimator.sock`.
+This Python module serves a PowerRequest from the model package in Kepler Exporter, as defined in estimator.go, via the unix domain socket `/tmp/estimator.sock`.
-Check us out on GitHub ➡️ [Kepler Model Server](https://github.com/sustainable-computing-io/kepler-model-server)
\ No newline at end of file
+Check us out on GitHub ➡️ [Kepler Model Server](https://github.com/sustainable-computing-io/kepler-model-server)
diff --git a/docs/design/ebpf_in_kepler.md b/docs/design/ebpf_in_kepler.md
index 20514fe6..31dff72f 100644
--- a/docs/design/ebpf_in_kepler.md
+++ b/docs/design/ebpf_in_kepler.md
@@ -1,37 +1,43 @@
# eBPF in Kepler
## Contents
- - [Background](#background)
- - [What is eBPF ?](#what-is-ebpf)
- - [What is a kprobe?](#what-is-a-kprobe)
- - [How to list all currently registered kprobes ?](#list-kprobes)
- - [Hardware CPU Events Monitoring](#hardware-cpu-events-monitoring)
- - [How to check if kernel supports perf_event_open?](#check-support-perf_event_open)
- - [Kernel routine probed by kepler](#kernel-routine-probed-by-kepler)
- - [Hardware CPU events monitored by Kepler](#hardware-cpu-events-monitored-by-kepler)
- - [Calculate process (aka task) total CPU time](#calculate-total-cpu-time)
- - [Calculate task CPU cycles](#calculate-total-cpu-cycle)
- - [Calculate task Ref CPU cycles](#calculate-total-cpu-ref-cycle)
- - [Calculate task CPU instructions](#calculate-total-cpu-instr)
- - [Calculate task Cache misses](#calculate-total-cpu-cache-miss)
- - [Calculate 'On CPU Average Frequency'](#calculate-on-cpu-avg-freq)
- - [Process Table](#process-table)
- - [References](#references)
+
+- [Background](#background)
+ - [What is eBPF ?](#what-is-ebpf)
+ - [What is a kprobe?](#what-is-a-kprobe)
+ - [How to list all currently registered kprobes ?](#list-kprobes)
+ - [Hardware CPU Events Monitoring](#hardware-cpu-events-monitoring)
+ - [How to check if kernel supports perf_event_open?](#check-support-perf_event_open)
+- [Kernel routine probed by kepler](#kernel-routine-probed-by-kepler)
+- [Hardware CPU events monitored by Kepler](#hardware-cpu-events-monitored-by-kepler)
+- [Calculate process (aka task) total CPU time](#calculate-total-cpu-time)
+- [Calculate task CPU cycles](#calculate-total-cpu-cycle)
+- [Calculate task Ref CPU cycles](#calculate-total-cpu-ref-cycle)
+- [Calculate task CPU instructions](#calculate-total-cpu-instr)
+- [Calculate task Cache misses](#calculate-total-cpu-cache-miss)
+- [Calculate 'On CPU Average Frequency'](#calculate-on-cpu-avg-freq)
+- [Process Table](#process-table)
+- [References](#references)
## Background
+
### What is eBPF ?
+
eBPF is a revolutionary technology with origins in the Linux kernel that can run sandboxed programs in a privileged context such as the operating system kernel. It is used to safely and efficiently extend the capabilities of the kernel without requiring to change kernel source code or load kernel modules. [1]
-### What is a kprobe?
+### What is a kprobe?
+
KProbes is a debugging mechanism for the Linux kernel which can also be used for monitoring events inside a production system. KProbes enables you to dynamically break into any kernel routine and collect debugging and performance information non-disruptively. You can trap at almost any kernel code address, specifying a handler routine to be invoked when the breakpoint is hit. [2]
#### How to list all currently registered kprobes ?
-```
+
+```bash
sudo cat /sys/kernel/debug/kprobes/list
```
### Hardware CPU Events Monitoring
+
Performance counters are special hardware registers available on most modern CPUs. These registers count the number of certain types of hw events: such as instructions executed, cache misses suffered, or branches mis-predicted -without slowing down the kernel or applications. [4]
Using syscall `perf_event_open` [5], Linux allows to set up performance monitoring for hardware and software performance. It returns a file descriptor to read performance information.
@@ -39,9 +45,10 @@ This syscall takes `pid` and `cpuid` as parameters. Kepler uses `pid == -1` and
This combination of pid and cpu allows measuring all process/threads on the specified cpu.
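+
+The same pid/cpu combination can be exercised from the command line with the `perf` tool, which is
+built on the same syscall. A minimal sketch, assuming `perf` is installed and you have sufficient
+privileges:
+
+```sh
+# Count hardware cycles for all tasks on CPU 0 for one second,
+# mirroring kepler's pid == -1, cpu == <cpuid> combination
+sudo perf stat -a -C 0 -e cycles -- sleep 1
+```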
#### How to check if kernel supports `perf_event_open`?
+
Check presence of `/proc/sys/kernel/perf_event_paranoid` to know if kernel supports `perf_event_open` and what is allowed to be measured
-```
+```text
The perf_event_paranoid file can be set to restrict
access to the performance counters.
@@ -57,19 +64,22 @@ Check presence of `/proc/sys/kernel/perf_event_paranoid` to know if kernel suppo
**CAP_SYS_ADMIN** is the highest level of capability, so it carries security implications
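+
+A minimal check on a running system (assuming the file exists on your kernel):
+
+```sh
+# Print the current restriction level; -1 means no restrictions,
+# while higher values progressively restrict unprivileged access
+cat /proc/sys/kernel/perf_event_paranoid
+```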
## Kernel routine probed by kepler
+
Kepler traps into `finish_task_switch` kernel function [3], which is responsible for cleaning up after a task switch occurs. Since the probe is `kprobe` it is called before `finish_task_switch` is called (instead of a `kretprobe` which is called after the probed function returns).
When a context switch occurs inside the kernel, the function `finish_task_switch` is called on the new task which is going to use the CPU. This function receives an argument of type `task_struct*` which contains all the information about the task which is leaving the CPU.[3]
-The probe function in kepler is
-```
+The probe function in kepler is:
+
+```c
int kprobe__finish_task_switch(struct pt_regs *ctx, struct task_struct *prev)
```
+
The first argument is of type pointer to a `pt_regs` struct which refers to the structure that holds the register state of the CPU at the time of the kernel function entry. This struct contains fields that correspond to the CPU registers, such as general-purpose registers (e.g., r0, r1, etc.), stack pointer (sp), program counter (pc), and other architectural-specific registers.
The second argument is a pointer to a `task_struct` which contains the task information for the previous task, i.e. the task which is leaving the CPU.
-
## Hardware CPU events monitored by Kepler
+
Kepler opens monitoring for the following hardware cpu events:

| PERF Type | Perf Count Type | Description | Array name (in bpf program) |
@@ -79,18 +89,17 @@ Kepler opens monitoring for following hardware cpu events
| PERF_TYPE_HARDWARE | PERF_COUNT_HW_INSTRUCTIONS | Retired instructions. Be careful, these can be affected by various issues, most notably hardware interrupt counts. | cpu_instr_hc_reader |
| PERF_TYPE_HARDWARE | PERF_COUNT_HW_CACHE_MISSES | Cache misses. Usually this indicates Last Level Cache misses; this is intended to be used in conjunction with the PERF_COUNT_HW_CACHE_REFERENCES event to calculate cache miss rates. | cache_miss_hc_reader |
-
Performance counters are accessed via special file descriptors. There's one file descriptor per virtual counter used. The file descriptor is associated with the corresponding array. When bcc wrapper functions are used, kepler reads the corresponding fd and returns its values.
-
## Calculate process (aka task) total CPU time
+
The ebpf program (`bpfassets/bcc/bcc.c`) maintains a mapping from a `<pid, cpu>` pair to a timestamp. The timestamp signifies the moment `kprobe__finish_task_switch` was called for the pid when this pid was to be scheduled on cpu `<cpuid>`.
-```
+```c
// => Context Switch Start time
-typedef struct pid_time_t { u32 pid; u32 cpu; } pid_time_t;
-BPF_HASH(pid_time, pid_time_t);
+typedef struct pid_time_t { u32 pid; u32 cpu; } pid_time_t;
+BPF_HASH(pid_time, pid_time_t);
// pid_time is the name of the variable, which is of type map
```
@@ -99,6 +108,7 @@ Within the function `get_on_cpu_time`, the difference between the current timest
This `on_cpu_time_delta` is used to accumulate the `process_run_time` metrics for the previous task.
## Calculate task CPU cycles
+
For task cpu cycles, the bpf program maintains an array named `cpu_cycles`, indexed by `cpuid`. This contains values from perf array `cpu_cycles_hc_reader`, which is a perf event type array.
On each task switch:
@@ -111,20 +121,24 @@ On each task switch:
The delta thus calculated is the cpu cycles used by the process leaving the cpu
## Calculate task Ref CPU cycles
+
Same process as calculating CPU cycles, difference being perf array used is `cpu_ref_cycles_hc_reader` and prev value is stored in `cpu_ref_cycles`
## Calculate task CPU instructions
+
Same process as calculating CPU cycles, difference being perf array used is `cpu_instr_hc_reader` and prev value is stored in `cpu_instr`
## Calculate task Cache misses
-Same process as calculating CPU cycles, difference being perf array used is `cache_miss_hc_reader` and prev value is stored in `cache_miss`
+Same process as calculating CPU cycles, difference being perf array used is `cache_miss_hc_reader` and prev value is stored in `cache_miss`
## Calculate 'On CPU Average Frequency'
-```
+
+```c
avg_freq = ((on_cpu_cycles_delta * CPU_REF_FREQ) / on_cpu_ref_cycles_delta) * HZ;
-CPU_REF_FREQ = 2500
+CPU_REF_FREQ = 2500
HZ = 1000
```
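+
+As a worked example of the formula above (the deltas are hypothetical; `CPU_REF_FREQ` is in MHz, so
+the result is in KHz):
+
+```sh
+# Suppose one scheduling interval produced these counter deltas:
+on_cpu_cycles_delta=3000000       # cycles at the actual running frequency
+on_cpu_ref_cycles_delta=2500000   # cycles at the fixed reference frequency
+echo $(( on_cpu_cycles_delta * 2500 / on_cpu_ref_cycles_delta * 1000 ))
+# => 3000000 KHz, i.e. an average on-CPU frequency of 3.0 GHz
+```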
@@ -155,19 +169,11 @@ This hash is read by the kernel collector in `container_hc_collector.go` for met
## References
[1] [https://ebpf.io/what-is-ebpf/](https://ebpf.io/what-is-ebpf/) , [https://www.splunk.com/en_us/blog/learn/what-is-ebpf.html](https://www.splunk.com/en_us/blog/learn/what-is-ebpf.html) , [https://www.tigera.io/learn/guides/ebpf/](https://www.tigera.io/learn/guides/ebpf/)
-
- [2] [An introduction to KProbes](https://lwn.net/Articles/132196/) , [Kernel Probes (Kprobes)](https://docs.kernel.org/trace/kprobes.html)
-
- [3] [finish_task_switch - clean up after a task-switch](https://elixir.bootlin.com/linux/v6.4-rc7/source/kernel/sched/core.c#L5157)
-
- [4] [Performance Counters for Linux](https://elixir.bootlin.com/linux/latest/source/tools/perf/design.txt)
-
- [5] [perf_event_open(2) — Linux manual page](https://www.man7.org/linux/man-pages/man2/perf_event_open.2.html)
-
-
-
-
+[2] [An introduction to KProbes](https://lwn.net/Articles/132196/), [Kernel Probes (Kprobes)](https://docs.kernel.org/trace/kprobes.html)
+
+[3] [finish_task_switch - clean up after a task-switch](https://elixir.bootlin.com/linux/v6.4-rc7/source/kernel/sched/core.c#L5157)
+
+[4] [Performance Counters for Linux](https://elixir.bootlin.com/linux/latest/source/tools/perf/design.txt)
+
+[5] [perf_event_open(2) — Linux manual page](https://www.man7.org/linux/man-pages/man2/perf_event_open.2.html)
diff --git a/docs/design/kepler-energy-sources.md b/docs/design/kepler-energy-sources.md
index 5c9bbdce..a68424c4 100644
--- a/docs/design/kepler-energy-sources.md
+++ b/docs/design/kepler-energy-sources.md
@@ -4,80 +4,104 @@
### RAPL - Running Average Power Limit
-Intel’s Running Average Power Limit (RAPL) is a hardware feature which allows to monitor energy consumption across different domains of the CPU chip, attached DRAM and on-chip GPU. This feature was introduced in Intel’s Sandy Bridge architecture and has evolved in the later versions of Intel’s processing architecture. With RAPL it is possible to programmatically get real time data on the power consumption of the CPU package and its components, as well as of the DRAM memory that the CPU is managing.
+Intel's Running Average Power Limit (RAPL) is a hardware feature which allows monitoring of
+energy consumption across different domains of the CPU chip, attached DRAM and on-chip
+GPU. This feature was introduced in Intel's Sandy Bridge architecture and has evolved in
+later versions of Intel's processor architecture. With RAPL it is possible to
+programmatically get real-time data on the power consumption of the CPU package and its
+components, as well as of the DRAM memory that the CPU is managing.
-RAPL provides two different functionalities.
-Allows energy consumption to be measured at very fine granularity and a high sampling rate.
-Allows limiting (or capping) the average power consumption of different components inside the processor, which also limits the thermal output of the processor
+RAPL provides two different functionalities:
+
+1. Allows energy consumption to be measured at very fine granularity and a high sampling
+ rate.
+2. Allows limiting (or capping) the average power consumption of different components inside
+   the processor, which also limits the thermal output of the processor.
Kepler makes use of the energy consumption measurement capability.
-RAPL supports multiple power domains. The RAPL power domain is a physically meaningful domain (e.g., Processor Package, DRAM etc) for power management. Each power domain informs the energy consumption of the domain.
+RAPL supports multiple power domains. The RAPL power domain is a physically meaningful domain
+(e.g., Processor Package, DRAM etc) for power management. Each power domain informs the energy
+consumption of the domain.
RAPL provides the following power domains for both measuring and limiting energy consumption:
- - **Package**: Package (PKG) domain measures the energy consumption of the entire socket. It includes the consumption of all the cores, integrated graphics and also the uncore components (last level caches, memory controller).
- - **Power Plane 0** (PP0) : measures the energy consumption of all processor cores on the socket.
- - **Power Plane 1** (PP1) : measures the energy consumption of processor graphics (GPU) on the socket (desktop models only).
- - **DRAM**: measures the energy consumption of random access memory (RAM) attached to the integrated memory controller.
-
-
+- **Package**: Package (PKG) domain measures the energy consumption of the entire socket. It
+ includes the consumption of all the cores, integrated graphics and also the uncore components
+ (last level caches, memory controller).
+- **Power Plane 0** (PP0) : measures the energy consumption of all processor cores on the socket.
+- **Power Plane 1** (PP1) : measures the energy consumption of processor graphics (GPU) on the
+ socket (desktop models only).
+- **DRAM**: measures the energy consumption of random access memory (RAM) attached to the integrated
+ memory controller.
The support for different power domains varies according to the processor model.
-
-
## Reading Energy values
### Using RAPL MSR (Model Specific Registers)
-The RAPL energy counters can be accessed through model-specific registers (MSRs). The counters are 32-bit registers that indicate the energy consumed since the processor was booted up. The counters are updated approximately once every millisecond. The energy is counted in multiples of model-specific energy units. Sandy Bridge uses energy units of 15.3 microjoules, whereas Haswell and Skylake uses units of 61 microjoules. The units can be read from specific MSRs before doing energy calculations.
+The RAPL energy counters can be accessed through model-specific registers (MSRs). The counters are
+32-bit registers that indicate the energy consumed since the processor was booted up. The counters
+are updated approximately once every millisecond. The energy is counted in multiples of model-specific
+energy units. Sandy Bridge uses energy units of 15.3 microjoules, whereas Haswell and Skylake use
+units of 61 microjoules. The units can be read from specific MSRs before doing energy calculations.
-The MSRs can be accessed directly on Linux using the msr driver in the kernel. Reading RAPL domain values directly from MSRs requires detecting the CPU model and reading the RAPL energy units before reading the RAPL domain. Once the CPU model is detected, the RAPL domains can be read per package of the CPU by reading the corresponding ’MSR status’ register.
+The MSRs can be accessed directly on Linux using the msr driver in the kernel. Reading RAPL domain
+values directly from MSRs requires detecting the CPU model and reading the RAPL energy units before
+reading the RAPL domain. Once the CPU model is detected, the RAPL domains can be read per package of
+the CPU by reading the corresponding `MSR status` register.
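+
+As an illustration, the counters can be inspected by hand with the `rdmsr` utility from the
+msr-tools package. This is only a sketch for Intel CPUs; the register addresses are model specific:
+
+```sh
+sudo modprobe msr          # load the msr kernel driver
+sudo rdmsr -p 0 0x606      # MSR_RAPL_POWER_UNIT: energy unit multiplier
+sudo rdmsr -p 0 0x611      # MSR_PKG_ENERGY_STATUS: package energy counter
+```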
There are basically two types of events that RAPL reports:
Static Events: thermal specifications, maximum and minimum power caps, and time windows.
-Dynamic Events: RAPL domain energy readings from the chip such as PKG, PP0, PP1 or DRAM
+Dynamic Events: RAPL domain energy readings from the chip such as PKG, PP0, PP1 or DRAM
-### Using RAPL Sysfs
-From Linux Kernel version 3.13 onwards, RAPL values can be read using “Power Capping Framework” [2].
+### Using RAPL Sysfs
-Linux Power Capping framework exposes power capping devices to user space via sysfs in the form of a tree of objects.
+From Linux Kernel version 3.13 onwards, RAPL values can be read using the Power Capping Framework [2].
-This sysfs tree is mounted at `/sys/class/powercap/intel-rapl`. When RAPL is available, this path exists and kepler reads energy values from this path.
+Linux Power Capping framework exposes power capping devices to user space via sysfs in the form of
+a tree of objects.
+
+This sysfs tree is mounted at `/sys/class/powercap/intel-rapl`. When RAPL is available, this path
+exists and kepler reads energy values from this path.
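+
+For example, on a machine where RAPL is available, the package domain can be read by hand (the
+exact zone layout varies by CPU model):
+
+```sh
+cat /sys/class/powercap/intel-rapl:0/name       # e.g. "package-0"
+cat /sys/class/powercap/intel-rapl:0/energy_uj  # cumulative energy in microjoules
+```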
### Using kernel driver xgene-hwmon
-Using Xgene-hwmon driver kepler reads power from APM X-Gene SoC. It supports reading CPU and IO power in micro watts.
-### Using eBpf perf events
+Using the xgene-hwmon driver, kepler reads power from the APM X-Gene SoC. It supports reading CPU
+and IO power in microwatts.
+
+### Using eBPF perf events
+
Not used in kepler
### Using PAPI library
-Performance Application Programming Interface (PAPI)
-Not used in kepler
+Performance Application Programming Interface (PAPI) is not used in kepler.
Kepler chooses one energy source in the following order of preference:
+
1. Sysfs
2. MSR
3. Hwmon
-
## Permissions required
+
### MSRs
+
Root access is required to use the msr driver
### Sysfs (powercap)
-Root access is required to use powercap driver
-
-
+Root access is required to use the powercap driver.
## References
- [1] [RAPL in Action: Experiences in Using RAPL for Power Measurements](https://helda.helsinki.fi/server/api/core/bitstreams/bdc6c9a5-74d4-494b-ae83-860625a665ce/content)
- [2] [RA Power Capping Framework](https://www.kernel.org/doc/html/next/power/powercap/powercap.html)
+[1] [RAPL in Action: Experiences in Using RAPL for Power Measurements](https://helda.helsinki.fi/server/api/core/bitstreams/bdc6c9a5-74d4-494b-ae83-860625a665ce/content)
+
+[2] [Power Capping Framework](https://www.kernel.org/doc/html/next/power/powercap/powercap.html)
+
- [3] [RA Kernel driver xgene-hwmon](https://docs.kernel.org/hwmon/xgene-hwmon.html)
+[3] [Kernel driver xgene-hwmon](https://docs.kernel.org/hwmon/xgene-hwmon.html)
+
- [4] [RA Performance Application Programming Interface (PAPI)](https://icl.utk.edu/papi/)
+[4] [Performance Application Programming Interface (PAPI)](https://icl.utk.edu/papi/)
diff --git a/docs/design/metrics.md b/docs/design/metrics.md
index cbeb79e1..8d83ca6f 100644
--- a/docs/design/metrics.md
+++ b/docs/design/metrics.md
@@ -1,18 +1,16 @@
# Monitoring Container Power Consumption with Kepler
Kepler Exporter exposes statistics from an application running in a Kubernetes cluster in a Prometheus-friendly format that can be
-scraped by any database that understands this format, such as `Prometheus`_ and `Sysdig`_.
-
-Kepler exports a variety of container metrics to Prometheus, where the main ones are those related
-to energy consumption.
+scraped by any database that understands this format, such as [Prometheus][0] and [Sysdig][1].
+
+Kepler exports a variety of container metrics to Prometheus, where the main ones are those related
+to energy consumption.
## Kepler metrics overview
-All the metrics specific to the Kepler Exporter are prefixed with `kepler_`.
-
+All the metrics specific to the Kepler Exporter are prefixed with `kepler_`.
-## Kepler metrics for Container Energy Consumption:
+## Kepler metrics for Container Energy Consumption
- **kepler_container_joules_total** (Counter)
This metric is the aggregated package/socket energy consumption of CPU, dram, gpus, and other host components for a given container.
@@ -23,7 +21,7 @@ All the metrics specific to the Kepler Exporter are prefixed with `kepler_`.
- **kepler_container_core_joules_total** (Counter)
This measures the total energy consumption on CPU cores that a certain container has used.
- Generally, when the system has access to `RAPL`_ metrics, this metric will reflect the proportional container energy consumption of the RAPL
+ Generally, when the system has access to [RAPL][3] metrics, this metric will reflect the proportional container energy consumption of the RAPL
Power Plan 0 (PP0), which is the energy consumed by all CPU cores in the socket.
However, this metric is processor model specific and may not be available on some server CPUs.
The RAPL CPU metric that is available on all processors that support RAPL is the package, which we will detail
@@ -32,7 +30,7 @@ All the metrics specific to the Kepler Exporter are prefixed with `kepler_`.
In some cases where RAPL is available but core metrics are not, Kepler may use the energy consumption package.
But note that package energy consumption is not just from CPU cores, it is all socket energy consumption.
- In case `RAPL`_ is not available, kepler might estimate this metric using the model server.
+ In case [RAPL][3] is not available, kepler might estimate this metric using the model server.
- **kepler_container_dram_joules_total** (Counter)
This metric describes the total energy spent in DRAM by a container.
@@ -42,7 +40,7 @@ All the metrics specific to the Kepler Exporter are prefixed with `kepler_`.
integrated GPU and memory controller, but the number of components may vary depending on the system.
The uncore metric is processor model specific and may not be available on some server CPUs.
- When `RAPL`_ is not available, kepler can estimate this metric using the model server if the node CPU supports the uncore metric.
+ When [RAPL][3] is not available, kepler can estimate this metric using the model server if the node CPU supports the uncore metric.
- **kepler_container_package_joules_total** (Counter)
This measures the cumulative energy consumed by the CPU socket, including all cores and uncore components (e.g.
@@ -50,7 +48,7 @@ All the metrics specific to the Kepler Exporter are prefixed with `kepler_`.
RAPL package energy is typically the PP0 + PP1, but PP1 counter may or may not account for all energy usage
by uncore components. Therefore, package energy consumption may be higher than core + uncore.
- When `RAPL`_ is not available, kepler might estimate this metric using the model server.
+ When [RAPL][3] is not available, kepler might estimate this metric using the model server.
- **kepler_container_other_joules_total** (Counter)
This measures the cumulative energy consumption on other host components besides the CPU and DRAM.
@@ -76,7 +74,7 @@ All the metrics specific to the Kepler Exporter are prefixed with `kepler_`.
Note:
"system_process" is a special indicator that aggregate all the non-container workload into system process consumption metric.
-## Kepler metrics for Container resource utilization:
+## Kepler metrics for Container resource utilization
### Base metric
@@ -85,7 +83,7 @@ Note:
### Hardware counter metrics
-- **kepler_container_cpu_cycles_total**
+- **kepler_container_cpu_cycles_total**
This measures the total CPU cycles used by the container using hardware counters.
To support fine-grained analysis of performance and resource utilization, hardware counters are particularly desirable
due to its granularity and precision.
@@ -93,13 +91,13 @@ Note:
The CPU cycles is a metric directly related to CPU frequency.
On systems where processors run at a fixed frequency, CPU cycles and total CPU time are roughly equivalent.
On systems where processors run at varying frequencies, CPU cycles and total CPU time will have different values.
-
-- **kepler_container_cpu_instructions_total**
+
+- **kepler_container_cpu_instructions_total**
This measures the total cpu instructions used by the container using hardware counters.
CPU instructions are the de facto metric for accounting for CPU utilization.
-- **kepler_container_cache_miss_total**
+- **kepler_container_cache_miss_total**
This measures the total cache miss that has occurred for a given container using hardware counters.
As there is no event counter that measures memory access directly, the number of last-level cache misses gives
@@ -112,11 +110,11 @@ Note:
### cGroups metrics
- **kepler_container_cgroupfs_cpu_usage_us_total**
- This measures the total CPU time used by the container reading from cGroups stat.
+ This measures the total CPU time used by the container reading from cGroups stat.
- **kepler_container_cgroupfs_memory_usage_bytes_total**
- This measures the total memory in bytes used by the container reading from cGroups stat.
+ This measures the total memory in bytes used by the container reading from cGroups stat.
- **kepler_container_cgroupfs_system_cpu_usage_us_total**
- This measures the total CPU time in kernelspace used by the container reading from cGroups stat.
+ This measures the total CPU time in kernel space used by the container reading from cGroups stat.
- **kepler_container_cgroupfs_user_cpu_usage_us_total**
This measures the total CPU time in userspace used by the container reading from cGroups stat.
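+
+These counters mirror what the kernel exposes in the cgroup filesystem. On a cgroup v2 host they can
+be inspected directly; the path below assumes the systemd cgroup driver and is purely illustrative:
+
+```sh
+# usage_usec, user_usec and system_usec correspond to the total,
+# userspace and kernel-space CPU time metrics above
+cat /sys/fs/cgroup/kubepods.slice/cpu.stat
+```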
@@ -132,12 +130,12 @@ Note:
Note:
You can enable/disable expose of those metrics through `EXPOSE_IRQ_COUNTER_METRICS` environment value.
-## Kepler metrics for Node information:
+## Kepler metrics for Node information
- **kepler_node_info** (Counter)
This metric shows the node metadata like the node CPU architecture.
-## Kepler metrics for Node energy consumption:
+## Kepler metrics for Node energy consumption
- **kepler_node_core_joules_total** (Counter)
Similar to container metrics, but representing the aggregation of all containers running on the node and operating system (i.e. "system_process").
@@ -173,7 +171,7 @@ Note:
This metric is specific to the model server and can be updated at any time.
-## Kepler metrics for Node resource utilization:
+## Kepler metrics for Node resource utilization
### Accelerator metrics
@@ -181,22 +179,20 @@ Note:
## Exploring Node Exporter metrics through the Prometheus expression
-All the energy consumption metrics are defined as counter following the `Prometheus metrics guide `_ for energy related metrics.
+All the energy consumption metrics are defined as counters following the [Prometheus metrics guide](https://prometheus.io/docs/practices/naming/) for energy-related metrics.
The `rate()` of joules gives the power in Watts since the rate function returns the average per second.
Therefore, to get the container energy consumption you can use the following query:
-
-`sum by (pod_name, container_name, container_namespace, node) (`
- `irate(kepler_container_joules_total{}[1m])`
-`)`
-
+```promql
+sum by (pod_name, container_name, container_namespace, node)(irate(kepler_container_joules_total{}[1m]))
+```
Note that we report the node label in the container metrics because the OS metrics "system_process" will have the same name and namespace across all nodes and we do not want to aggregate them.
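+
+The same expression can also be issued against the Prometheus HTTP API, assuming the server is
+reachable on `localhost:9090` (for example via `kubectl port-forward`):
+
+```sh
+curl -s 'http://localhost:9090/api/v1/query' \
+  --data-urlencode 'query=sum by (pod_name, container_name, container_namespace, node)(irate(kepler_container_joules_total{}[1m]))'
+```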
## RAPL power domain
-`RAPL power domains supported `_ in some
+[RAPL power domains supported](https://zhenkai-zhang.github.io/papers/rapl.pdf) in some
recent Intel microarchitectures (consumer-grade/server-grade):
| Microarchitecture | Package | CORE (PP0) | UNCORE (PP1) | DRAM |
@@ -206,8 +202,6 @@ resent Intel microarchitecture (consumer-grade/server-grade):
| Skylake | Y/Y | Y/Y | Y/**N** | Y/Y |
| Kaby Lake | Y/Y | Y/Y | Y/**N** | Y/Y |
-.. _Prometheus: https://prometheus.io
-
-.. _Sysdig: https://sysdig.com/
-
-.. _RAPL: https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/running-average-power-limit-energy-reporting.html
+[0]: https://prometheus.io
+[1]: https://sysdig.com/
+[3]: https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/running-average-power-limit-energy-reporting.html
diff --git a/docs/design/power_model.md b/docs/design/power_model.md
index 4aac2308..919c2add 100644
--- a/docs/design/power_model.md
+++ b/docs/design/power_model.md
@@ -1,15 +1,28 @@
# Kepler Power Model
-In Kepler, with respective to available measurements, we provide a pod-level power with a mix of two power modeling approaches:
+In Kepler, with respect to available measurements, we provide pod-level power with a mix of two
+power modeling approaches:
## Modeling Approach
-- **Power Ratio Modeling**: This modeling computes a finer-grained power by the usage ratio over the total summation of power. This modeling is used by default when the total power is known.
-- **Power Estimation Modeling**: This modeling estimates a power by using usage metrics as input features of the trained model. This modeling can be used even if the power metric cannot be measured. The estimation can be done in three levels: Node total power (including fan, power supply, etc.), Node internal component powers (such as CPU, Memory), Pod power.
+- **Power Ratio Modeling**: This modeling computes a finer-grained power by the usage ratio over the
+ total summation of power. This modeling is used by default when the total power is known.
- also see [Get started with Kepler Model Server](../kepler_model_server/get_started.md)
+- **Power Estimation Modeling**: This modeling estimates power by using usage metrics as input features
+of the trained model. This modeling can be used even if the power metric cannot be measured. The estimation
+can be done at three levels: Node total power (including fan, power supply, etc.), Node internal component
+powers (such as CPU, Memory), Pod power.
-- **Pre-trained Power Models**: We provide pre-trained power models for different deployment scenarios. Current x86_64 pretrained model are developed in [Intel® Xeon® Processor E5-2667 v3](https://github.com/sustainable-computing-io/kepler-model-db/tree/main/models). Models with other architectures are coming soon. You can find these models in [Kepler Model DB](https://github.com/sustainable-computing-io/kepler-model-db/tree/main/models/v0.6/nx12). These models support both power ratio modeling and power estimation modeling for both RAPL and ACPI power sources. The `AbsPower` models estimate both idle and dynamic power while the `DynPower` models only estimate dynamic power. The MAE (mean absolute error) of these models are also published. Kepler container image has preloaded [acpi/AbsPower/BPFOnly/SGDRegressorTrainer_1.json](https://github.com/sustainable-computing-io/kepler-model-db/blob/main/models/v0.6/nx12/std_v0.6/acpi/AbsPower/BPFOnly/SGDRegressorTrainer_1.json) model for node energy estimate and [rapl/AbsPower/BPFOnly/SGDRegressorTrainer_1.json](https://github.com/sustainable-computing-io/kepler-model-db/blob/main/models/v0.6/nx12/std_v0.6/rapl/AbsPower/BPFOnly/SGDRegressorTrainer_1.json) for Container absolute power estimate.
+ > **Note**: Also see [Get started with Kepler Model Server](../kepler_model_server/get_started.md)
+
+- **Pre-trained Power Models**: We provide pre-trained power models for different deployment scenarios.
+  Current x86_64 pre-trained models were developed on [Intel® Xeon® Processor E5-2667 v3][1]. Models with
+ other architectures are coming soon. You can find these models in [Kepler Model DB][2]. These models
+ support both power ratio modeling and power estimation modeling for both RAPL and ACPI power sources.
+ The `AbsPower` models estimate both idle and dynamic power while the `DynPower` models only estimate
+ dynamic power. The MAE (Mean Absolute Error) of these models are also published. Kepler container
+ image has preloaded [acpi/AbsPower/BPFOnly/SGDRegressorTrainer_1.json][3] model for node energy estimate
+ and [rapl/AbsPower/BPFOnly/SGDRegressorTrainer_1.json][4] for Container absolute power estimate.
## Usage Scenario
@@ -23,5 +36,9 @@ VM with node info and power passthrough from BM (x86 with power meter)|Measureme
VM with node info and power passthrough from BM (x86 but no power meter)|Power Estimation|Measurement + VM Mapping|Power Ratio
VM with node info and power passthrough from BM (non-x86 with power meter)|Measurement + VM Mapping|Power Estimation|Power Ratio
VM with node info|Power Estimation|Power Estimation|Power Ratio
-Pure VM|\-|\-|Power Estimation
-|||
\ No newline at end of file
+Pure VM|-|-|Power Estimation
+
+[1]:https://github.com/sustainable-computing-io/kepler-model-db/tree/main/models
+[2]:https://github.com/sustainable-computing-io/kepler-model-db/tree/main/models/v0.6/nx12
+[3]:https://github.com/sustainable-computing-io/kepler-model-db/blob/main/models/v0.6/nx12/std_v0.6/acpi/AbsPower/BPFOnly/SGDRegressorTrainer_1.json
+[4]:https://github.com/sustainable-computing-io/kepler-model-db/blob/main/models/v0.6/nx12/std_v0.6/rapl/AbsPower/BPFOnly/SGDRegressorTrainer_1.json
diff --git a/docs/hardwareengagement/index.md b/docs/hardwareengagement/index.md
index 08e5f1c1..629a9532 100644
--- a/docs/hardwareengagement/index.md
+++ b/docs/hardwareengagement/index.md
@@ -4,31 +4,41 @@ In this document, we will share our steps as how to engagement kepler with a spe
You are able to take this out as a todo list and step by step to make kepler engage with your own hardware device.
## Stage 0 Proof
+
In this Stage, we will focus on basic data can be collected by golang, and you can build your own kepler which running on your device well. The following steps can running in parallel.
-### Binary build and container build.
+
+### Binary build and container build
+
Currently kepler container image is from a GPU image to support GPU case. Considering a general case for IOT device. You may need to build kepler from UBI image. We recommend you following steps below to setup a local build env and try to build.
1. Find a linux OS.
-1. Install kepler dependencies as ebpf golang(BCC), linux header and build kepler(from main branch or latest release branch) binary.
+1. Install kepler dependencies such as eBPF golang (BCC) and Linux headers, then build the kepler binary (from the main branch or latest release branch).
1. (Optional)Modify [dockerfile](https://github.com/sustainable-computing-io/kepler/tree/main/build) to build the container image.
-### Power consumption API.
+### Power consumption API
+
Currently, we use power consumption API as RAPL or ACPI. For some of the devices, you may need to find your own way to get power consumption, and implement in golang for kepler usage. For further plan, please ref [here](https://github.com/sustainable-computing-io/kepler/issues/644)
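+
+A quick way to see whether a standard power source is already exposed on the target device (a
+sketch; both paths may legitimately be absent on IOT hardware):
+
+```sh
+ls /sys/class/powercap/ 2>/dev/null   # RAPL via the powercap framework
+ls /sys/class/hwmon/ 2>/dev/null      # hwmon-based power/thermal sensors
+```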
-### ebpf/cgroup data.
-Currently, we relays on ebpf and cgroup to characterization a process/pod. Hence, you can ref to our dependency as BCC or cgroup. To test those golang package works well on your device.
+### eBPF/cgroup data
+
+Currently, we rely on eBPF and cgroup data to characterize a process/pod. Hence, you can refer to our dependencies, such as BCC or cgroup, and test that those golang packages work well on your device.
## Stage 1 Integration with ratio
+
During this Stage, we are going to ref Kepler model. To integrate and implement your own logic specific to your device and deep dive into Power consumption API.
### Scope
+
You should know the scope of the Power consumption API. How many API do you have? Is it categorized by CPU/memory/IO or not?
### Interval
-You should know the intervals of the Power consumption API. As kepler collect ebpf and cgroup data in each 3s by default, you should know the interval and make them in same time slot.
+
+You should know the intervals of the Power consumption API. As kepler collects eBPF and cgroup data every 3s by default, you should know the interval and align both to the same time slot.
### Verify
+
You can cross check and verify the data.
## Stage 2 Model_training
-tbd
+
+TBD
diff --git a/docs/index.md b/docs/index.md
index b56dc3cd..5cda3176 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -8,10 +8,11 @@ Check out the project on GitHub ➡️ [Kepler](https://github.com/sustainable-c
For a comprehensive overview, please check out ➡️ [this CNCF blog article](https://www.cncf.io/blog/2023/10/11/exploring-keplers-potentials-unveiling-cloud-application-power-consumption/).
-
+
We are a Cloud Native Computing Foundation sandbox project.
-
\ No newline at end of file
+
diff --git a/docs/installation/community-operator.md b/docs/installation/community-operator.md
index f2751748..e2bdf09c 100644
--- a/docs/installation/community-operator.md
+++ b/docs/installation/community-operator.md
@@ -79,16 +79,16 @@ operatorcondition.operators.coreos.com/kepler-operator.v0.8.1 12h
- Go to Operators ❯ Operator Hub. Search for `Kepler`.
Click on Kepler Operator tile, then select `Continue` and then `Install`
-![](../fig/ocp_installation/operator_installation_ocp_1_0.8.z.png)
+![Operator installation in OCP](../fig/ocp_installation/operator_installation_ocp_1_0.8.z.png)
- Choose `alpha` or `dev-preview` channel for deploying the `latest` or the `developer preview` version of the Operator respectively.
- Click on `Install`
-![](../fig/ocp_installation/operator_installation_ocp_2_0.8.z.png)
+![Operator installation in OCP](../fig/ocp_installation/operator_installation_ocp_2_0.8.z.png)
- Wait until Operator gets installed
-![](../fig/ocp_installation/operator_installation_ocp_3_0.8.z.png)
+![Operator installation in OCP](../fig/ocp_installation/operator_installation_ocp_3_0.8.z.png)
Follow the link to view installed Operators in `openshift-operators` Namespace
or use the UI to navigate to installed operators and select the Kepler
@@ -96,20 +96,20 @@ Operator.
- Select `Create instance` to Create a Custom Resource for Kepler
-![](../fig/ocp_installation/operator_installation_ocp_4_0.8.z.png)
+![Operator installation in OCP](../fig/ocp_installation/operator_installation_ocp_4_0.8.z.png)
- There is a `Form` and `YAML` view, using the **YAML** view
provides more detail.
-![](../fig/ocp_installation/operator_installation_ocp_5a_0.8.z.png)
+![Operator installation in OCP](../fig/ocp_installation/operator_installation_ocp_5a_0.8.z.png)
-![](../fig/ocp_installation/operator_installation_ocp_5b_0.8.z.png)
+![Operator installation in OCP](../fig/ocp_installation/operator_installation_ocp_5b_0.8.z.png)
- Once Kepler is configured select `Create`.
-* Check that the Availability status of Kepler Instance should be `True`
+- Check that the Availability status of Kepler Instance should be `True`
-![](../fig/ocp_installation/operator_installation_ocp_6_0.8.z.png)
+![Operator installation in OCP](../fig/ocp_installation/operator_installation_ocp_6_0.8.z.png)
- Check that the Kepler is deployed and available
@@ -136,9 +136,9 @@ To view the metrics directly from OpenShift Console
- Configure user workload monitoring on the cluster. Refer to the official OpenShift [documentation](https://docs.openshift.com/container-platform/latest/monitoring/enabling-monitoring-for-user-defined-projects.html) for more information.
- Navigate to Observe ❯ Dashboard
- To view overall power consumption select `Power Monitoring / Overview` from dropdown.
- ![](../fig/ocp_installation/operator_installation_ocp_7_0.8.z.png)
+ ![Operator installation](../fig/ocp_installation/operator_installation_ocp_7_0.8.z.png)
- To view the power consumption by namespace select `Power Monitoring / Namespace` from dropdown.
- ![](../fig/ocp_installation/operator_installation_ocp_8_0.8.z.png)
+ ![Operator installation](../fig/ocp_installation/operator_installation_ocp_8_0.8.z.png)
### Deploy the Grafana Dashboard
@@ -182,19 +182,19 @@ When the script successfully completes it provides the OpenShift Route to the Ke
Sign in to the Grafana dashboard using the credentials `kepler:kepler`.
-![](../fig/ocp_installation/operator_installation_ocp_9_0.8.z.png)
+![Operator installation](../fig/ocp_installation/operator_installation_ocp_9_0.8.z.png)
#### Access the Grafana Console Route
The dashboard can also be accessed through the OCP UI, Go to Networking ❯ Routes.
-![](../fig/ocp_installation/operator_installation_ocp_10_0.8.z.png)
+![Operator installation](../fig/ocp_installation/operator_installation_ocp_10_0.8.z.png)
#### Grafana Deployment Overview
Refer to the [Grafana Deployment Overview](https://github.com/sustainable-computing-io/kepler-operator/blob/v1alpha1/docs/developer/assets/grafana-deployment-overview.png)
-![](https://github.com/sustainable-computing-io/kepler-operator/blob/v1alpha1/docs/developer/assets/grafana-deployment-overview.png)
+![Grafana deployment overview](https://github.com/sustainable-computing-io/kepler-operator/blob/v1alpha1/docs/developer/assets/grafana-deployment-overview.png)
---
@@ -208,7 +208,8 @@ Kubernetes that is installed e.g. `v1.25`.
### How do I set nodeSelector and tolerations for Kepler?
-You can specify **nodeSelector** and **toleration's** for Kepler at the time of creating Instance. You can specify both in `Form` and `YAML` view.
+You can specify **nodeSelector** and **tolerations** for Kepler at the time of creating an Instance.
+You can specify both in `Form` and `YAML` view.
- To specify in `YAML` view:
diff --git a/docs/installation/kepler-helm.md b/docs/installation/kepler-helm.md
index a2ed6f29..43a8683a 100644
--- a/docs/installation/kepler-helm.md
+++ b/docs/installation/kepler-helm.md
@@ -6,6 +6,7 @@ and [ArtifactHub](https://artifacthub.io/packages/helm/kepler/kepler)
## Install Helm
+
[Helm](https://helm.sh) must be installed to use the charts.
Please refer to Helm's [documentation](https://helm.sh/docs/) to get started.
@@ -29,6 +30,7 @@ helm install prometheus prometheus-community/kube-prometheus-stack \
--wait
```
+
## Add the Kepler Helm repo
```bash
@@ -111,6 +113,7 @@ kubectl cp kepler_dashboard.json monitoring/$GF_POD:/tmp/dashboards/kepler_dashb
```
## Uninstall Kepler
+
To uninstall this chart, use the following steps
```bash
diff --git a/docs/installation/kepler-operator.md b/docs/installation/kepler-operator.md
index af1536d5..d479a9f7 100644
--- a/docs/installation/kepler-operator.md
+++ b/docs/installation/kepler-operator.md
@@ -45,15 +45,15 @@ To access the Grafana Console locally on the browser port-forward on 3000 using
kubectl port-forward svc/grafana 3000:3000 -n monitoring
```
-> Note: Grafana Console can be accessed on [http://localhost:3000](http://localhost:3000)
+> **Note**: Grafana Console can be accessed on [http://localhost:3000](http://localhost:3000)
### Service Monitor
For `kube-prometheus` to scrape `kepler-exporter` service endpoint you need to configure a service monitor.
-> Note: By default `kube-prometheus` does not let you scrape services deployed in namespaces other than `monitoring`. So if you are running Kepler outside `monitoring` [follow this to set up Prometheus to scrape all namespaces](#scrape-all-namespaces).
+> **Note**: By default `kube-prometheus` does not let you scrape services deployed in namespaces other than `monitoring`. So if you are running Kepler outside `monitoring` [follow this to set up Prometheus to scrape all namespaces](#scrape-all-namespaces).
-```
+```sh
kubectl apply -n monitoring -f - << EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
@@ -93,7 +93,7 @@ To set up the Grafana dashboard follow these steps:
- Sign in [localhost:3000](http:localhost:3000) using `admin:admin`
- Import default [dashboard](https://raw.githubusercontent.com/sustainable-computing-io/kepler-operator/v1alpha1/hack/dashboard/assets/kepler/dashboard.json) from Kepler operator repository
-![](../fig/ocp_installation/kind_grafana.png)
+![kind-grafana](../fig/ocp_installation/kind_grafana.png)
## Uninstall the operator
diff --git a/docs/installation/kepler-rpm.md b/docs/installation/kepler-rpm.md
index 4dccabbf..62753800 100644
--- a/docs/installation/kepler-rpm.md
+++ b/docs/installation/kepler-rpm.md
@@ -1,4 +1,5 @@
# Install Kepler as RPM
+
To install the kepler rpm [download](https://github.com/sustainable-computing-io/kepler/releases/) the latest stable version, unpack and install:
```sh
@@ -16,10 +17,12 @@ journalctl -f | grep kepler
```
In order to do process-level energy accounting type:
+
```sh
mkdir -p /etc/kepler/kepler.config
echo -n true > /etc/kepler/kepler.config/ENABLE_PROCESS_METRICS
```
+
The kepler service runs on default port 8888.
-Use your web browser to navigate to the machine IP on port 8888.
\ No newline at end of file
+Use your web browser to navigate to the machine IP on port 8888.
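+
+To spot-check the exporter from the machine itself, query the metrics endpoint (the grep pattern is
+illustrative; all Kepler metrics share the `kepler_` prefix):
+
+```sh
+curl -s http://localhost:8888/metrics | grep '^kepler_' | head
+```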
diff --git a/docs/installation/kepler.md b/docs/installation/kepler.md
index fd69ed44..275f2d53 100644
--- a/docs/installation/kepler.md
+++ b/docs/installation/kepler.md
@@ -4,17 +4,16 @@
Before you deploy kepler make sure:
-- you have a Kubernetes cluster running. If you want to do local cluster set up [follow this](./local-cluster.md#install-kind)
+- you have a Kubernetes cluster running. If you want to set up a local cluster, [follow this](./local-cluster.md#install-kind)
- the Monitoring stack, i.e. Prometheus with Grafana is set up. [Steps here](#deploy-the-prometheus-operator)
->The default Grafana deployment can be accessed with the credentials `admin:admin`. You can expose the web-based UI locally using:
+> **Note**: The default Grafana deployment can be accessed with the credentials `admin:admin`. You can expose the web-based UI locally using:
```sh
kubectl -n monitoring port-forward svc/grafana 3000
```
-
-#### Running Kepler on a local kind cluster
+### Running Kepler on a local kind cluster
To run Kepler on `kind`, we need to build it locally with specific flags. The full details of local builds are covered in the [section below](#build-manifests). To deploy on a local `kind` cluster, you need to use the `CI_DEPLOY` and `PROMETHEUS_DEPLOY` flags.
@@ -27,13 +26,12 @@ kubectl apply -f _output/generated-manifest/deployment.yaml
The following deployment will also create a service listening on port `9102`.
->If you followed the Kepler dashboard deployment steps, you can access the Kepler dashboard by navigating to [http://localhost:3000/](http://localhost:3000/) Login using `admin:admin`. Skip the window where Grafana asks to input a new password.
-
+> **Note**: If you followed the Kepler dashboard deployment steps, you can access the Kepler dashboard by navigating to [http://localhost:3000/](http://localhost:3000/). Login using `admin:admin`. Skip the window where Grafana asks to input a new password.
-![](../fig/grafana_dashboard.png)
+![Grafana dashboard](../fig/grafana_dashboard.png)
+### Build manifests
-#### Build manifests
First, fork the [kepler](https://github.com/sustainable-computing-io/kepler) repository and clone it.
If you want to use Redfish BMC and IPMI, you need to add Redfish and IPMI credentials of each of the kubelet node to the `redfish.csv` under the `kepler/manifests/config/exporter` directory. The format of the file is as follows:
@@ -49,19 +47,19 @@ where, `kubelet_node_name` in the first column is the name of the node where the
kubectl get nodes
```
-`redfish_username` and `redfish_password` in the second and third columns are the credentials to access the Redfish API from each node.
+`redfish_username` and `redfish_password` in the second and third columns are the credentials to access the Redfish API from each node.
While `https://redfish_ip_or_hostname` in the fourth column is the Redfish endpoint in IP address or hostname.
-
Then, build the manifests file that suit your environment and deploy it with the following steps:
```bash
make build-manifest OPTS=""
-# minimum deployment:
+# minimum deployment:
# > make build-manifest
-# deployment with sidecar on openshift:
+# deployment with sidecar on openshift:
# > make build-manifest OPTS="ESTIMATOR_SIDECAR_DEPLOY OPENSHIFT_DEPLOY"
```
+
Manifests will be generated in `_output/generated-manifest/` by default.
Deployment Option|Description|Dependency
@@ -87,11 +85,10 @@ REDFISH_PROBE_INTERVAL_IN_SECONDS|60|Interval in seconds to get power consumptio
REDFISH_SKIP_SSL_VERIFY|true|`true` if TLS verification is disabled on connecting to Redfish endpoint.
`build-manifest` requirements:
-- kubectl v1.21+
-- make
-- go
-
+- kubectl v1.21+
+- make
+- go
## Deploy the Prometheus operator
@@ -99,40 +96,44 @@ If Prometheus is already installed in the cluster, skip this step. Otherwise, fo
1. Clone the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) project to your local folder, and enter the `kube-prometheus` directory.
-```sh
-git clone --depth 1 https://github.com/prometheus-operator/kube-prometheus
-cd kube-prometheus
-```
-
-2. This step is optional. You can later manually add the [Kepler Grafana dashboard](https://raw.githubusercontent.com/sustainable-computing-io/kepler/main/grafana-dashboards/Kepler-Exporter.json) through the Grafana UI. To automatically do that, fetch the `kepler-exporter` Grafana dashboard and inject in the Prometheus Grafana deployment. This step uses [yq](https://github.com/mikefarah/yq), a YAML processor:
-
-```sh
-KEPLER_EXPORTER_GRAFANA_DASHBOARD_JSON=`curl -fsSL https://raw.githubusercontent.com/sustainable-computing-io/kepler/main/grafana-dashboards/Kepler-Exporter.json | sed '1 ! s/^/ /'`
-mkdir -p grafana-dashboards
-cat - > ./grafana-dashboards/kepler-exporter-configmap.yaml << EOF
-apiVersion: v1
-data:
- kepler-exporter.json: |-
- $KEPLER_EXPORTER_GRAFANA_DASHBOARD_JSON
-kind: ConfigMap
-metadata:
- labels:
- app.kubernetes.io/component: grafana
- app.kubernetes.io/name: grafana
- app.kubernetes.io/part-of: kube-prometheus
- app.kubernetes.io/version: 9.5.3
- name: grafana-dashboard-kepler-exporter
- namespace: monitoring
-EOF
-yq -i e '.items += [load("./grafana-dashboards/kepler-exporter-configmap.yaml")]' ./manifests/grafana-dashboardDefinitions.yaml
-yq -i e '.spec.template.spec.containers.0.volumeMounts += [ {"mountPath": "/grafana-dashboard-definitions/0/kepler-exporter", "name": "grafana-dashboard-kepler-exporter", "readOnly": false} ]' ./manifests/grafana-deployment.yaml
-yq -i e '.spec.template.spec.volumes += [ {"configMap": {"name": "grafana-dashboard-kepler-exporter"}, "name": "grafana-dashboard-kepler-exporter"} ]' ./manifests/grafana-deployment.yaml
-```
+ ```sh
+ git clone --depth 1 https://github.com/prometheus-operator/kube-prometheus
+ cd kube-prometheus
+ ```
+
+2. This step is optional. You can later manually add the [Kepler Grafana dashboard][1] through the
+   Grafana UI. To automatically do that, fetch the `kepler-exporter` Grafana dashboard and inject it into
+   the Prometheus Grafana deployment. This step uses [yq](https://github.com/mikefarah/yq), a YAML processor:
+
+ ```sh
+ KEPLER_EXPORTER_GRAFANA_DASHBOARD_JSON=`curl -fsSL https://raw.githubusercontent.com/sustainable-computing-io/kepler/main/grafana-dashboards/Kepler-Exporter.json | sed '1 ! s/^/ /'`
+ mkdir -p grafana-dashboards
+ cat - > ./grafana-dashboards/kepler-exporter-configmap.yaml << EOF
+ apiVersion: v1
+ data:
+ kepler-exporter.json: |-
+ $KEPLER_EXPORTER_GRAFANA_DASHBOARD_JSON
+ kind: ConfigMap
+ metadata:
+ labels:
+ app.kubernetes.io/component: grafana
+ app.kubernetes.io/name: grafana
+ app.kubernetes.io/part-of: kube-prometheus
+ app.kubernetes.io/version: 9.5.3
+ name: grafana-dashboard-kepler-exporter
+ namespace: monitoring
+ EOF
+ yq -i e '.items += [load("./grafana-dashboards/kepler-exporter-configmap.yaml")]' ./manifests/grafana-dashboardDefinitions.yaml
+ yq -i e '.spec.template.spec.containers.0.volumeMounts += [ {"mountPath": "/grafana-dashboard-definitions/0/kepler-exporter", "name": "grafana-dashboard-kepler-exporter", "readOnly": false} ]' ./manifests/grafana-deployment.yaml
+ yq -i e '.spec.template.spec.volumes += [ {"configMap": {"name": "grafana-dashboard-kepler-exporter"}, "name": "grafana-dashboard-kepler-exporter"} ]' ./manifests/grafana-deployment.yaml
+ ```
3. Finally, apply the objects in the `manifests` directory. This will create the `monitoring` namespace and CRDs, and then wait for them to be available before creating the remaining resources. During the `until` loop, a response of `No resources found` is to be expected. This statement checks whether the resource API is created but doesn't expect the resources to be there.
-```sh
-kubectl apply --server-side -f manifests/setup
-until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
-kubectl apply -f manifests/
-```
+ ```sh
+ kubectl apply --server-side -f manifests/setup
+ until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
+ kubectl apply -f manifests/
+ ```
+
+[1]:https://raw.githubusercontent.com/sustainable-computing-io/kepler/main/grafana-dashboards/Kepler-Exporter.json
diff --git a/docs/installation/local-cluster.md b/docs/installation/local-cluster.md
index 7a877786..9c5de17f 100644
--- a/docs/installation/local-cluster.md
+++ b/docs/installation/local-cluster.md
@@ -4,9 +4,12 @@ Kepler runs on Kubernetes. If you already have access to a cluster, you can skip
## Install kind
-To install `kind`, please [see the instructions here](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
+To install `kind`, please [see the instructions here](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
-We need to configure our cluster to run Kepler. Specifically, we need to mount `/proc` (to expose information about processes running on the host) and `/usr/src` (to expose kernel headers allowing dynamic eBPF program compilation - this dependency [might be removed in future releases](https://github.com/sustainable-computing-io/kepler/issues/716)) into the node containers. Below is a minimal single-node example configuration:
+We need to configure our cluster to run Kepler. Specifically, we need to mount `/proc` (to expose
+information about processes running on the host) and `/usr/src` (to expose kernel headers allowing
+dynamic eBPF program compilation - this dependency [might be removed in future releases][1]) into the
+node containers. Below is a minimal single-node example configuration:
```yaml
# ./local-cluster-config.yaml
@@ -30,4 +33,6 @@ export $CLUSTER_NAME="my-cluster" # we can use the --name flag to override the
kind create cluster --name=$CLUSTER_NAME --config=./local-cluster-config.yaml
```
-Note that `kind` automatically switches your current `kubeconfig` context to the newly created cluster.
\ No newline at end of file
+Note that `kind` automatically switches your current `kubeconfig` context to the newly created cluster.
+
+[1]: https://github.com/sustainable-computing-io/kepler/issues/716
diff --git a/docs/installation/strategy.md b/docs/installation/strategy.md
index e6d078af..820ea690 100644
--- a/docs/installation/strategy.md
+++ b/docs/installation/strategy.md
@@ -4,12 +4,9 @@ While you are free to explore any deployments but the recommended strategies are
| *OCP 4.13* | *Microshift* | *RHEL* | *ROSA* | *Kind* |
| ------------- | ------------- | ----- | ----- | ----|
-| kepler-operator | Manifests | RPM | Manifests | Helm Charts, Manifests, kepler-operator
+| kepler-operator | Manifests | RPM | Manifests | Helm Charts, Manifests, kepler-operator|
## Requirements
+
- Kernel 4.18+
- `kubectl` v1.21.0+
-
-
-
-
diff --git a/docs/kepler_model_server/api.md b/docs/kepler_model_server/api.md
index 07f55410..e6458d4e 100644
--- a/docs/kepler_model_server/api.md
+++ b/docs/kepler_model_server/api.md
@@ -1,76 +1,87 @@
# Kepler Model Server API
+
## Getting Powers from Estimator
+
**module:** estimator (src/estimate/estimator.py)
-```
+
+```sh
/tmp/estimator.socket
```
-Parameters of [PowerRequest](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/estimate/estimator.py)
+
+### Parameters of [PowerRequest](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/estimate/estimator.py)
|key|value|description
-|---|---|---|
+|---|-----|-----------
|metrics|list of string|list of available input features (measured metrics)
-|output_type|either of the following values: *AbsPower* (for node-level power model), *DynPower* (for container-level power model)|the requested model type
-|trainer_name (optional)|string|filter model with trainer name.
-|filter (optional)|string|expression in the form *attribute1*:*threshold1*; *attribute2*:*threshold2*.
+|output_type|either of the following values: *AbsPower* (for node-level power model), *DynPower* (for container-level power model)|the requested model type
+|trainer_name (optional)|string|filter model with trainer name
+|filter (optional)|string|expression in the form *attribute1*:*threshold1*; *attribute2*:*threshold2*
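+
+A hypothetical sketch of sending a `PowerRequest` over the socket (assuming the estimator
+accepts a raw JSON payload on the unix domain socket and that `socat` is available; field
+values are illustrative):
+
+```sh
+echo '{"metrics": ["cpu_instr", "cache_miss"], "output_type": "AbsPower"}' \
+  | socat - UNIX-CONNECT:/tmp/estimator.socket
+```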
+
+## Getting Power Models from Model Server
-## Getting Power Models from Model Server
**module:** server (src/server/model_server.py)
-```
+
+```sh
:8100/model
POST
```
-Parameters of [ModelRequest](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/server/model_server.py)
+### Parameters of [ModelRequest](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/server/model_server.py)
|key|value|description
-|---|---|---|
+|---|-----|-----------
|metrics|list of string|list of available input features (measured metrics)
-|output_type|either of the following values: *AbsPower* (for node-level power model), *DynPower* (for container-level power model)|the requested model type
+|output_type|either of the following values: *AbsPower* (for node-level power model), *DynPower* (for container-level power model)|the requested model type
|weight|boolean|return model weights in json format if true. Otherwise, return model in zip file format.
|trainer_name (optional)|string|filter model with trainer name.
|node_type (optional)|string|filter model with node type.
|filter (optional)|string|expression in the form *attribute1*:*threshold1*; *attribute2*:*threshold2*.
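+
+A hypothetical request sketch (assuming the model server is reachable on `localhost:8100`,
+e.g. via `kubectl port-forward`; field values are illustrative):
+
+```sh
+curl -s -X POST http://localhost:8100/model \
+  -H 'Content-Type: application/json' \
+  -d '{"metrics": ["cpu_instr", "cache_miss"], "output_type": "AbsPower", "weight": true}'
+```
+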
## Offline Trainer
+
**module:** offline trainer (src/train/offline_trainer.py)
-```
+
+```sh
:8102/train
POST
```
-Parameters of [TrainRequest](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/train/offline_trainer.py)
+
+### Parameters of [TrainRequest](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/train/offline_trainer.py)
|key|value|description
-|---|---|---|
+|---|---|---
|name|string|pipeline/model name
-|energy_source|valid key in [PowerSourceMap](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/util/train_types.py)|target energy source to train for
+|energy_source|valid key in [PowerSourceMap](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/util/train_types.py)|target energy source to train for
|trainer|TrainAttribute|attributes for training
|prome_response|json|prom response with workload for power model training
-- TrainAttribute
-
- |key|value|description
- |---|---|---|
- |abs_trainers|list of [available trainer class names](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/train/trainer)|trainer classes in the pipeline to train for absolute power
- |dyn_trainers|list of [available trainer class names](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/train/trainer)|trainer classes in the pipeline to train for dynamic power
- |isolator|[valid isolator class name](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/train/isolator/)|isolator class of the pipeline to isolate the target data to train for dynamic power
- |isolator_args|dict|mapping between isolator-specific argument name and value
+#### TrainAttribute
+|key|value|description
+|---|---|---
+|abs_trainers|list of [available trainer class names](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/train/trainer)|trainer classes in the pipeline to train for absolute power
+|dyn_trainers|list of [available trainer class names](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/train/trainer)|trainer classes in the pipeline to train for dynamic power
+|isolator|[valid isolator class name](https://github.com/sustainable-computing-io/kepler-model-server/tree/main/src/train/isolator/)|isolator class of the pipeline to isolate the target data to train for dynamic power
+|isolator_args|dict|mapping between isolator-specific argument name and value
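+
+A hypothetical `TrainRequest` sketch (assuming port 8102 is reachable and `rapl` is a valid
+`PowerSourceMap` key; trainer and isolator names are illustrative):
+
+```sh
+curl -s -X POST http://localhost:8102/train \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "name": "my-pipeline",
+    "energy_source": "rapl",
+    "trainer": {
+      "abs_trainers": ["GradientBoostingRegressor"],
+      "dyn_trainers": ["GradientBoostingRegressor"],
+      "isolator": "MinIdleIsolator",
+      "isolator_args": {}
+    },
+    "prome_response": {}
+  }'
+```
+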
## Posting Model Weights [WIP]
+
**module:** server (src/server/model_server.py)
-```
+
+```sh
/metrics
GET
```
## Online Trainer [WIP]
+
**module:** online trainer (src/train/online_trainer.py)
running as a sidecar to server
-```
+
+```sh
periodically query prometheus metric server on SAMPLING INTERVAL
```
## Profiler [WIP]
-**module:** profiler (src/profile/profiler.py)
-
+**module:** profiler (src/profile/profiler.py)
diff --git a/docs/kepler_model_server/architecture.md b/docs/kepler_model_server/architecture.md
index 460f2be5..8a87a432 100644
--- a/docs/kepler_model_server/architecture.md
+++ b/docs/kepler_model_server/architecture.md
@@ -1,15 +1,14 @@
# Kepler Model Server Architecture
-Kepler model server is a supplementary project of Kepler that facilitates power model training and serving. This provides an ecosystem of Kepler to collect metrics from one environment, train a power model with [pipeline framework](./pipeline.md), and serve back to another environment that a power meter (energy measurement) is not available.
-
-![](../fig/model-server-components-simplified.png)
+Kepler model server is a supplementary project of Kepler that facilitates power model training and serving. This allows the Kepler ecosystem to collect metrics from one environment, train a power model with the [pipeline framework](./pipeline.md), and serve it back to another environment where a power meter (energy measurement) is not available.
+
+![Model server components](../fig/model-server-components-simplified.png)
**Pipeline Input:** Prometheus query results while the training workload was running.
**Pipeline Output:** A directory that contains archived absolute and dynamic power models trained by each available feature group which is labeled by each available energy source.
-```
+```sh
[Pipeline name]/[Energy source]/[Model type]/[Feature group]/[Archived model]
```
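+
+For example, a hypothetical archived model trained with `GradientBoostingRegressor` on a
+node of type `1` might be stored at a path like:
+
+```sh
+MyPipeline/rapl/DynPower/CgroupOnly/GradientBoostingRegressor_1
+```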
@@ -17,10 +16,10 @@ Kepler model server is a supplementary project of Kepler that facilitates power
- [**Energy/Power source**](./pipeline.md#labeling-energy-source) a power meter source of power label.
- [**Model type**](./pipeline.md#idle-powercontrol-plane-power) a type of model with or without background isolation.
- [**Feature group**](./pipeline.md#available-metrics) a utilization metric source of model input.
-- **Archived model** a folder and zip file in the format`[trainer name]_[node type]` where trainer is a name of training solution such as `GradientBoostingRegressor` and `node_type` is a categorized [profile](./node_profile.md) of the server used for training. The folder contains
- - metadata.json
- - model files
- - weight.json (model weight for local estimator supported models such as linear regression (LR))
- - feature engineering (fe) files
-
+- **Archived model** a folder and zip file in the format `[trainer name]_[node type]`, where the trainer name is the training solution (such as `GradientBoostingRegressor`) and `node_type` is a categorized [profile](./node_profile.md) of the server used for training. The folder contains:
+ - metadata.json
+ - model files
+ - weight.json (model weight for local estimator supported models such as linear regression (LR))
+ - feature engineering (fe) files
+
Check out the project on GitHub ➡️ [Kepler Model Server](https://github.com/sustainable-computing-io/kepler-model-server).
diff --git a/docs/kepler_model_server/get_started.md b/docs/kepler_model_server/get_started.md
index 083af84c..255ea0fd 100644
--- a/docs/kepler_model_server/get_started.md
+++ b/docs/kepler_model_server/get_started.md
@@ -35,6 +35,7 @@ data:
```
### Select power model
+
---
There are two ways to obtain a power model: static and dynamic.
diff --git a/docs/kepler_model_server/pipeline.md b/docs/kepler_model_server/pipeline.md
index 514e67ed..81a43247 100644
--- a/docs/kepler_model_server/pipeline.md
+++ b/docs/kepler_model_server/pipeline.md
@@ -25,7 +25,8 @@ The `train` step is to apply each `trainer` to create multiple choices of power
## Energy source
-`energy source` or `source` refers to the source (power meter) that provides an energy number. Each source provides one or more `energy components`. Currently supported source are shown as below.
+`energy source` or `source` refers to the source (power meter) that provides an energy number. Each
+source provides one or more `energy components`. The currently supported sources are shown below.
Energy/power source|Energy/power components
---|---
@@ -34,7 +35,10 @@ Energy/power source|Energy/power components
## Feature group
-`feature group` is an abstraction of the available features based on the infrastructure context since some environments might not expose some metrics. For example, on the virtual machine in private cloud environment, hardware counter metrics are typically not available. Therefore, the models are trained for each defined resource utilization metric group as below.
+`feature group` is an abstraction of the available features based on the infrastructure context, since
+some environments might not expose certain metrics. For example, on virtual machines in private cloud
+environments, hardware counter metrics are typically not available. Therefore, the models are trained
+for each defined resource utilization metric group as below.
Group Name|Features|Kepler Metric Source(s)
---|---|---
@@ -48,7 +52,8 @@ Basic|COUNTER_FEATURES, CGROUP_FEATURES, BPF_FEATURES|All except IRQ and node in
WorkloadOnly|COUNTER_FEATURES, CGROUP_FEATURES, BPF_FEATURES, IRQ_FEATURES, ACCELERATOR_FEATURES|All except node information
Full|WORKLOAD_FEATURES, SYSTEM_FEATURES|All
-Node information refers to value from [kepler_node_info](../design/metrics.md#kepler-metrics-for-node-information) metric.
+Node information refers to the value of the [kepler_node_info](../design/metrics.md#kepler-metrics-for-node-information)
+metric.
## Power isolation
@@ -58,19 +63,39 @@ The `isolate` step applies a mechanism to separate idle power from absolute powe
It's important to note that both the idle and dynamic `system_processes` power are higher than zero, even when the metric utilization of the users' workload is zero.
-> We have a roadmap to identify and isolate a constant power portion which is significantly increased at a specific resource utilization called `activation power` to fully isolate all constant power consumption from the dynamic power.
+The `isolate` step applies a mechanism to separate idle power from absolute power, resulting in
+dynamic power. It also covers an implementation to separate the dynamic power consumed by background
+and OS processes (referred to as `system_processes`).
-We refer to models trained using the isolate step as `DynPower` models. Meanwhile, models trained without the isolate step are called `AbsPower` models. Currently, the `DynPower` model does not include idle power information, but we plan to incorporate it in the future.
+It's important to note that both the idle and dynamic `system_processes` power are higher than
+zero, even when the metric utilization of the users' workload is zero.
There are two commonly available `isolators`: *ProfileIsolator* and *MinIdleIsolator*.
-*ProfileIsolator* relies on collecting data (e.g., power and resource utilization) for a specific period without running any user workload (referred to as profile data). This isolation mechanism also eliminates the resource utilization of `system_processes` from the data used to train the model.
+We refer to models trained using the isolate step as `DynPower` models. Meanwhile, models
+trained without the isolate step are called `AbsPower` models. Currently, the `DynPower` model
+does not include idle power information, but we plan to incorporate it in the future.
On the other hand, *MinIdleIsolator* identifies the minimum power consumption among all samples in the training data, assuming that this minimum power consumption represents both the idle power and `system_processes` power consumption.
While we should also remove the minimal resource utilization from the data used to train the model, this isolation mechanism includes the resource utilization by `system_processes` in the training data. However, we plan to remove it in the future.
-If the `profile data` that matches a given `node_type` exist, the pipeline will use the *ProfileIsolator* to preprocess the training data. Otherwise, the the pipeline will applied another isolation mechanism, such as the *MinIdleIsolator*.
+*ProfileIsolator* relies on collecting data (e.g., power and resource utilization) for a
+specific period without running any user workload (referred to as profile data). This
+isolation mechanism also eliminates the resource utilization of `system_processes` from
+the data used to train the model.
+
+On the other hand, *MinIdleIsolator* identifies the minimum power consumption among all
+samples in the training data, assuming that this minimum power consumption represents both
+the idle power and `system_processes` power consumption.
+
+While we should also remove the minimal resource utilization from the data used to train the
+model, this isolation mechanism includes the resource utilization by `system_processes` in the
+training data. However, we plan to remove it in the future.
+
+If `profile data` that matches a given `node_type` exists, the pipeline will use the
+*ProfileIsolator* to pre-process the training data. Otherwise, the pipeline will apply
+another isolation mechanism, such as the *MinIdleIsolator*.
(check how profiles are generated [here](./node_profile.md))
@@ -98,3 +123,4 @@ Available trainer (v0.6):
## Node type
Kepler forms multiple groups of machines (nodes) based on its benchmark performance and trains a model separately for each group. The identified group is exported as `node type`.
+
diff --git a/docs/platform-validation/index.md b/docs/platform-validation/index.md
index 084fb2e2..896b3937 100644
--- a/docs/platform-validation/index.md
+++ b/docs/platform-validation/index.md
@@ -1,79 +1,110 @@
# Platform Validation Framework
-In this document, we will share the design and implementation of Kepler's platform validation framework. Please refer to enhancement [document](https://github.com/sustainable-computing-io/kepler/blob/main/enhancements/platform-validation.md) for the feature initiative and scope.
+In this document, we will share the design and implementation of Kepler's platform validation framework.
+Please refer to the enhancement [document](https://github.com/sustainable-computing-io/kepler/blob/main/enhancements/platform-validation.md)
+for the feature initiative and scope.
-Kepler should and will integrate with various of hardware platforms, the framework will first use Intel X86 BareMetal platform as example to show the platform validaiton mechanism and workflows. Other platform owners could use this document as reference to add their specific test cases and workflows to make Kepler better engage with their platforms.
+Kepler should and will integrate with various hardware platforms. The framework first uses the Intel X86
+BareMetal platform as an example to show the platform validation mechanism and workflows. Other platform owners
+can use this document as a reference to add their specific test cases and workflows to make Kepler better
+engage with their platforms.
## Mechanism and methodology
-Platform validation work should be done automatically, could follow the standard Github Action workflow mechanism and let the target platform be self-hosted runner. See Github action offical document for more details about [self-hosted runner](https://docs.github.com/en/actions/hosting-your-own-runners).
+Platform validation should be done automatically. It can follow the standard GitHub Actions workflow
+mechanism, with the target platform acting as a self-hosted runner. See the official GitHub Actions
+documentation for more details about [self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners).
-Platform validation cases should follow the curent Kepler's Ginkgo test framework.
+Platform validation cases should follow the current Kepler's Ginkgo test framework.
-We could leverage the Ginkgo [Reporting Infrastructure](https://onsi.github.io/ginkgo/#reporting-infrastructure) to generate test report in both human and machine readable formats, such as JSON.
+We could leverage the Ginkgo [Reporting Infrastructure](https://onsi.github.io/ginkgo/#reporting-infrastructure)
+to generate test reports in both human- and machine-readable formats, such as JSON.
Platform validation cases should support both validity and accuracy checks on Kepler/Prometheus-exposed data.
-Take Intel X86 BareMetal platform's validation as example, we have introduced an independent RAPL-based energy collection and power consumption calculation tool called `validator`.
+Taking the Intel X86 BareMetal platform's validation as an example, we have introduced an independent RAPL-based
+energy collection and power consumption calculation tool called `validator`.
The current work mechanism and features of `validator` are simple:
+
1. It can detect the current platform's CPU model type.
-2. It could detect the current platform's inband RAPL components support status and OOB platform power source support status.
-3. It uses specific sampling count(configurable, by default 20 sampling cycles) with specific sampling interval(configurable, by default 15 seconds) to collect the specific components' RAPL values and calculate the power consumption of each sampling, then achieve the mean value of node components power among those sampling cycles.
+2. It can detect the current platform's in-band RAPL component support status and OOB platform power source
+   support status.
+3. It uses a specific sampling count (configurable, by default 20 sampling cycles) with a specific sampling
+   interval (configurable, by default 15 seconds) to collect each component's RAPL values, calculates the power
+   consumption of each sample, and then takes the mean node component power across those sampling cycles
+   (a minimal sketch of this calculation follows the list).
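+
+A minimal sketch of one such power calculation (assuming the standard powercap
+`energy_uj` counter is readable and ignoring counter overflow):
+
+```sh
+E1=$(cat /sys/class/powercap/intel-rapl:0/energy_uj)
+sleep 15   # sampling interval
+E2=$(cat /sys/class/powercap/intel-rapl:0/energy_uj)
+# the energy delta is in microjoules; divide by the interval and 10^6 to get watts
+echo "package power: $(( (E2 - E1) / 15 / 1000000 )) W"
+```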
-Test cases could use above sampling and calculation results as comparison base to check the Kepler exported and Prometheus aggregated query results.
+Test cases can use the above sampling and calculation results as a comparison baseline to check the
+Kepler-exported and Prometheus-aggregated query results.
-For other platforms, developers may use other specific measurement methods and tools to implement similar validation targets and logic.
+For other platforms, developers may use other specific measurement methods and tools to implement similar
+validation targets and logic.
### Data validity check
-Intel X86 Platforms could have different CPU models which are exported by Kepler, this should be verified in independent way. The case setup and execution should have no extra dependencies on the platform OS and software packages/libraries.
+Intel X86 platforms can have different CPU models, which are exported by Kepler; this should be verified in an
+independent way. The case setup and execution should have no extra dependencies on the platform OS and software
+packages/libraries.
-For Intel X86 CPUs, especially the new coming ones, the cpu model detail information comes with `cpuid` tool. On Linux Distros such as Ubuntu, the `cpuid` tool version is often not up-to-date, so it is better way to design an OS agnostic container to perform the test.
+For Intel X86 CPUs, especially newly released ones, detailed CPU model information comes from the `cpuid` tool.
+On Linux distros such as Ubuntu, the `cpuid` tool version is often not up to date, so it is better to design
+an OS-agnostic container to perform the test.
-For Intel X86 platforms, the specific RAPL domains available vary across product segments.
+For Intel X86 platforms, the specific RAPL domains available vary across product segments.
Platforms targeting the client segment support the following RAPL domain hierarchy:
-• Package
+* Package
-• Two power planes: PP0 and PP1 (PP1 may reflect to uncore devices)
+* Two power planes: PP0 and PP1 (PP1 may reflect to uncore devices)
Platforms targeting the server segment support the following RAPL domain hierarchy:
-• Package
-
-• Power plane: PP0
+* Package
-• DRAM
+* Power plane: PP0
-We need to check the current hardware's component power souce support status and check if the exposed data is expected (zero or non-zero).
+* DRAM
+We need to check the current hardware's component power source support status and verify that the exposed data is
+as expected (zero or non-zero).
### Data accuracy check
-For Intel X86 platforms, since the RAPL based system power collection is available on BareMetal, Kepler uses [Power Ratio Modeling](https://sustainable-computing.io/design/power_model/).
+For Intel X86 platforms, since the RAPL based system power collection is available on BareMetal, Kepler uses
+[Power Ratio Modeling](https://sustainable-computing.io/design/power_model/).
Such power attribution mechanism accuracy should be checked.
We need to introduce an independent and platform agnostic way to collect the node components power info.
-Due to the root priviledge limit on the access of the RAPL SYSFS files, we need to use priviledged container to perform the test.
+Because root privileges are required to access the RAPL sysfs files, we need to use a privileged container
+to perform the test.
+For the node-level power accuracy check, the comparison logic is simple and straightforward, while for the
+pod/container-level power accuracy check, the logic is a bit more complicated and needs some assumptions.
-For node level power accuracy check, the comparison logic is simple and strightforward, while for pod/container level power accuracy check, the logic is a little bit complicated and need some assumptions.
+1. RAPL energy is node/package level, so we can only use the RAPL sampling delta to calculate the power change
+introduced by application deployment and undeployment.
-(1) RAPL energy is node/package level, so we could only use RAPL sampling delta to calculate the power change introduced by application deployment and undeployment.
+2. On the other hand, application power consumption data can be queried from Prometheus, where Kepler's exported
+time-series metrics are aggregated.
-(2) On the other hand, application power consumption data could be queried from Prometheus where Kepler's exported time-series metrics are aggregated.
+3. On a clean/idle platform, without interference from other tenants' workloads, we can simply assume that
+the power increase introduced by deploying the specific application is equal to the power delta calculated for the
+test container in (1).
-(3) On a clean/idle platform, without other tenants' workloads interference, we could have a simple assumption that the power increase introduced by the specific application deployment is equal to the test container calculated power delta in (1).
+4. On a busy platform, with other tenants' workloads running in parallel (in containers or in VMs, for instance), the
+scenarios become complicated. Since the interference power may fluctuate and be undetectable, the check criteria for
+Kepler's `Power Ratio Modeling` are unknown; for now we just dump all the available data and leave the result for
+evaluation after the platform validation test.
-(4) On a busy platform, with other tenants' workloads running in parallel, in container or in VM, for instance, the scenarios become complicated. Since the interference power may be fluctuated and unable to detect, the Kepler `Power Ratio Modeling` check criteria is unknown, now we just dump all the available data and leave the result for evaluation after platform validation test.
+5. On platforms that run CPU/GPU-intensive workloads, such as an AI/AIGC inference pipeline's worker nodes, the
+`validator` raw data can be used to measure the approximate power consumption contribution of the inferencing
+pods/containers. We can then compare it with the power consumption data that Kepler reports, either in PromQL query
+results (automation) or on a Grafana dashboard (manual test).
-(5) On some CPU/GPU-intensive workloads deployment platform, such as AI/AIGC inference pipeline's worker node, the `validator` raw data could be used to measure the approximate power consumption contribution of those inferencing pods/containers. We can then compare the Kepler demonstrated power consumption data, either in PromQL query result(automation) or on Grafana Dashboard(manual test).
-
-This is also meaningful test case for the carbon footprint accuracy check on AI/AIGC workloads.
+This is also a meaningful test case for the carbon footprint accuracy check on AI/AIGC workloads.
## Validation workflow
@@ -85,9 +116,11 @@ The workflow is similar as the e2e integration test workflow, has two phases job
Build both the `Kepler` container image and the `Validator` container image.
+The current Validator image follows the Kepler image build mechanism and is built from a GPU base image to support
+the GPU case. We may simply build it from a UBI image later.
-Since it is an independent energy collector other than Kepler, those eBPF stuffs will not be bumped into the container.
+Since it is an energy collector independent of Kepler, the eBPF components are not bundled into the
+container.
### Platform validation test
@@ -95,38 +128,49 @@ The major steps in this job are as follows:
1. Download artifacts and import images.
-2. Run `validator` image to collect energy and calculate node components power consumption data before deploying local-dev-cluster(Kind in this example)
+2. Run the `validator` image to collect energy and calculate node component power consumption data before deploying
+   the local-dev-cluster (Kind in this example).
-3. Use Kepler action to deploy local-dev-cluster(Kind) with local registry support and full kube-prometheus monitoring stack.
+3. Use the Kepler action to deploy the local-dev-cluster (Kind) with local registry support and the full
+   kube-prometheus monitoring stack.
-4. Run `validator` image again to collect energy and calculate node components power consumption data before deploying Kepler exporter.
+4. Run the `validator` image again to collect energy and calculate node component power consumption data before
+   deploying the Kepler exporter.
5. Deploy Kepler exporter in local-dev-cluster above and let it be monitored by Prometheus.
-6. Run `validator` image the 3rd time to collect energy and calculate node components power consumption data after deploying Kepler exporter.
+6. Run the `validator` image a third time to collect energy and calculate node component power consumption data
+   after deploying the Kepler exporter.
7. Run platform-validation test scripts, do below things:
-(1) `port forwarding` kepler-exporter and prometheus-k8s service to let their ports accessible to Ginkgo test code.
-
-(2) Use `ginkgo` CLI to run [test suite](https://github.com/sustainable-computing-io/kepler/blob/main/e2e/platform-validation), execute cases described in section [Mechanism and methodology](https://github.com/sustainable-computing-io/kepler-doc/blob/main/docs/platform-validation/index.md#mechanism-and-methodology) above and generate test report in `platform_validation_report.json` file assigned by `--json-report` parameter.
+    1. Port-forward the kepler-exporter and prometheus-k8s services to make their ports accessible to the Ginkgo
+       test code.
-(3) Dump necessary test log and data for post-test evaluation use.
+    2. Use the `ginkgo` CLI to run the [test suite](https://github.com/sustainable-computing-io/kepler/blob/main/e2e/platform-validation),
+       execute the cases described in the [Mechanism and methodology](https://github.com/sustainable-computing-io/kepler-doc/blob/main/docs/platform-validation/index.md#mechanism-and-methodology)
+       section above, and generate a test report in the `platform_validation_report.json` file specified by the `--json-report` parameter.
-In specific test cases, the power data delta of step 4 and 2 could be comparison base for local-dev-cluster's power consumption; while the power data delta of step 6 and 4 could be the kepler exporter's power comparison base.
+    3. Dump the necessary test logs and data for post-test evaluation.
8. Cleanup the test environment(undeploy Kepler exporter, the local-dev-cluster and the local registry).
+> **Note**: In specific test cases, the power data delta between steps 4 and 2 can serve as the comparison
+> baseline for the local-dev-cluster's power consumption, while the delta between steps 6 and 4 can serve as
+> the Kepler exporter's power comparison baseline.
## Validation result evaluation
Check validation report for any failure cases.
-Check accuracy cases comparison result. Node level cases should be more accurate than pod/container level cases, make analysis on any abnormal results and figure out RCAs for them.
+Check the comparison results of the accuracy cases. Node-level cases should be more accurate than
+pod/container-level cases; analyze any abnormal results and identify their root causes.
-Manual/automation check for specific workloads on specific platforms, see example in (5) of `Data accuracy check` section above.
+Perform manual/automated checks for specific workloads on specific platforms; see the example in (5) of the
+`Data accuracy check` section above.
-TODO: There is an analyzer under developing for abnormality detection in platform validation test, it will be integrated into the current workflow when it is ready.
+**TODO**: An analyzer for abnormality detection in platform validation tests is under development;
+it will be integrated into the current workflow when it is ready.
## Further works
@@ -134,7 +178,5 @@ Identify issues based on the validation report on specific platforms and figure
Extend platform scenario from BareMetal to VM.
-Design validation cases for VM scenario and support more accurate power attribution among VM-based workloads and container/process-based workloads.
-
-
-
+Design validation cases for VM scenario and support more accurate power attribution among VM-based
+workloads and container/process-based workloads.
diff --git a/docs/project/contributing.md b/docs/project/contributing.md
index faa1b4b8..03f3506a 100644
--- a/docs/project/contributing.md
+++ b/docs/project/contributing.md
@@ -4,9 +4,9 @@ We welcome all kinds of contributions to Kepler from the community!
For an in-depth guide on how to get started, check out the Contributing Guide [here](https://github.com/sustainable-computing-io/kepler/blob/main/CONTRIBUTING.md).
-# Kepler adopters
+## Kepler adopters
-You and your organisation are using Kepler? That’s awesome. We would love to hear from you! 💚
+You and your organisation are using Kepler? That's awesome. We would love to hear from you! 💚
## Adding your organisation
@@ -20,11 +20,13 @@ To do so follow these steps:
2. Clone it locally with `git clone https://github.com//kepler-doc.git`.
3. (Optional) Add the logo of your organisation to docs/fig/logos. Good practice is for the logo to be called e.g. MY-ORG.png (=> docs/fig/logos/default.svg is the Kepler logo, it is used when no organisation logo is provided.)
4. Add an entry to the YAML file with the name of your organisation, url that links to its website, and the path to the logo. Example:
-```
- - name: Kepler
- url: https://sustainable-computing.io/
- logo: logos/kepler.svg
-```
+
+ ```yaml
+ - name: Kepler
+ url: https://sustainable-computing.io/
+ logo: logos/kepler.svg
+ ```
+
5. Save the file, then do `git add -A` and commit using `git commit -s -m "Add MY-ORG to adopters"` (commit signoff is required, see [DCO of the kepler project](https://github.com/sustainable-computing-io/kepler/blob/main/DCO)).
6. Push the commit with `git push origin main`.
-7. Open a Pull Request to [kepler-doc](https://github.com/sustainable-computing-io/kepler-doc)
\ No newline at end of file
+7. Open a Pull Request to [kepler-doc](https://github.com/sustainable-computing-io/kepler-doc)
diff --git a/docs/project/support.md b/docs/project/support.md
index 86c78b6a..bd581882 100644
--- a/docs/project/support.md
+++ b/docs/project/support.md
@@ -1,8 +1,7 @@
# Support
+## The best ways to seek support
-The best ways to seek support are
----------------------------------
1. Opening an [issue](https://github.com/sustainable-computing-io/kepler/issues) in Kepler.
-2. Starting a [discussion](https://github.com/sustainable-computing-io/kepler/discussions)
\ No newline at end of file
+2. Starting a [discussion](https://github.com/sustainable-computing-io/kepler/discussions)
diff --git a/docs/usage/deep_dive.md b/docs/usage/deep_dive.md
index 96ee83bd..d0dd2ffc 100644
--- a/docs/usage/deep_dive.md
+++ b/docs/usage/deep_dive.md
@@ -14,8 +14,8 @@ Kepler uses the following to collects power data:
#### EBPF, Hardware Counters, cGroups
-Kepler can utilize a BPF program integrated into the kernel’s pathway to extract process-related resource utilization metrics or use metrics from Hardware Counters or cGroups.
-The type of metrics used to build the model can differ based on the system’s environment.
+Kepler can utilize a BPF program integrated into the kernel's pathway to extract process-related resource utilization metrics or use metrics from Hardware Counters or cGroups.
+The type of metrics used to build the model can differ based on the system's environment.
For example, it might use hardware counters, or metrics from tools like eBPF or cGroups, depending on what is available in the system that will use the model.
#### Real-time Component Power Meters
@@ -43,10 +43,10 @@ The Model Server trains its models using Prometheus metrics from a specific bare
When creating the power model, the Model Server uses a regression algorithm. It keeps training the model until it reaches an acceptable level of accuracy.
Once trained, the Model Server makes these models accessible through a github repository, where any Kepler deployment can download the model from.
-Kepler then uses these models to calculate how much power a node (VM) consumes based on the way its resources are being used. The type of metrics used to build the model can differ based on the system’s environment.
+Kepler then uses these models to calculate how much power a node (VM) consumes based on the way its resources are being used. The type of metrics used to build the model can differ based on the system's environment.
For example, it might use hardware counters, or metrics from tools like eBPF or cGroups, depending on what is available in the system that will use the model.
-![](../fig/power_model_training.jpg)
+![Power model training](../fig/power_model_training.jpg)
For details on the architecture follow the [documentation](https://sustainable-computing.io/kepler_model_server/architecture/) on Kepler Model Server.
@@ -55,20 +55,20 @@ For details on the architecture follow the [documentation](https://sustainable-c
Depending on the environment that Kepler was deployed in, the system power consumption metrics collection will vary.
For example, consider the figure below, Kepler can be deployed either through BMs or VMs environments.
-![](../fig/vms_versus_bms.jpg)
+![VMs vs BMs](../fig/vms_versus_bms.jpg)
#### Direct Real-Time System Power Metrics (Bare Metals)
In bare-metal environments that allow the direct collection of real-time system power metrics, Kepler can split the power consumption of a given system resource using the Ratio Power model.
The APIs that expose the real-time power metrics export the absolute power, which is the sum of the dynamic and idle power.
To be more specific, the dynamic power is directly related to the resource utilization and the idle power is the constant power that does not vary regardless if the system is at rest or with load.
-This concept is important because the idle and dynamic power are splitted differently across all processes.
+This concept is important because the idle and dynamic power are split differently across all processes.
#### Estimated System Power Metrics (Virtual Machines)
In VM environments on public clouds, there is currently no direct way to measure the power that a VM consumes. Therefore, we need to estimate the power using a trained power model, which has some limitations that impact the model accuracy.
-Kepler can estimate the dynamic power consumption of VMs using trained power models. Then, after estimating each VM’s power consumption, Kepler applies the Ratio Power Model to estimate the processes’ power consumption.
+Kepler can estimate the dynamic power consumption of VMs using trained power models. Then, after estimating each VM's power consumption, Kepler applies the Ratio Power Model to estimate the processes' power consumption.
However, since VMs usually do not provide hardware counters, Kepler uses eBPF metrics instead of hardware counters to calculate the ratios.
It is important to highlight that trained power models used for VMs on a public cloud cannot split the idle power of a resource because we cannot know how many other VMs are running in the host.
We provide more details in the limitation section in this blog. Therefore, Kepler does not expose the idle power of a container running on top of a VM.
@@ -87,13 +87,13 @@ Then, by using the VM power consumption, another Kepler instance within the VM c
### Ratio Power Model Explained
As explained earlier the dynamic power is directly related to the resource utilization and the idle power is the constant power that does not vary regardless if the system is at rest or with load.
-This concept is important because the idle and dynamic power are splitted differently across all processes. Now we can describe the Ratio Power model, which divides the dynamic power across all processes.
+This concept is important because the idle and dynamic power are split differently across all processes. Now we can describe the Ratio Power model, which divides the dynamic power across all processes.
-The Ratio Power model calculates the ratio of a process’s resource utilization to the entire system’s resource utilization and then multiplying this ratio by the dynamic power consumption of a resource.
+The Ratio Power model calculates the ratio of a process's resource utilization to the entire system's resource utilization and then multiplies this ratio by the dynamic power consumption of a resource.
This allows us to accurately estimate power usage based on actual resource utilization, ensuring that if, for instance, a program utilizes 10% of the CPU, it consumes 10% of the total CPU power.
-The idle power estimation follows the [GreenHouse Gas (GHG) protocol guideline](https://www.gesi.org/research/ict-sector-guidance-built-on-the-ghg-protocol-product-life-cycle-accounting-and-reporting-standard), which defines that the constant host idle power should be splitted among processes/containers based on their size (relative to the total size of other containers running on the host).
-Additionally, it’s important to note that different resource utilizations are estimated differently in Kepler.
+The idle power estimation follows the [GreenHouse Gas (GHG) protocol guideline](https://www.gesi.org/research/ict-sector-guidance-built-on-the-ghg-protocol-product-life-cycle-accounting-and-reporting-standard), which defines that the constant host idle power should be split among processes/containers based on their size (relative to the total size of other containers running on the host).
+Additionally, it's important to note that different resource utilizations are estimated differently in Kepler.
We utilize hardware counters to assess resource utilization in bare-metal environments, using CPU instructions to estimate CPU utilization, collecting cache misses for memory utilization, and assessing Streaming Multiprocessor (SM) utilization for GPUs utilization.
### How is the power consumption attribution done?
@@ -103,15 +103,15 @@ Now that we have explained how Kepler gathers data and train model and the Ratio
Once all the data that is related to energy consumption and resource utilization are collected, Kepler can calculate the energy consumed by each process. This is done by dividing the power used by a given resource based on the ratio of the process and system resource utilization. We will detail this model later on in this blog.
Then, with the power consumption of the processes, Kepler aggregates the power into containers and Kubernetes Pods levels. The data collected and estimated for the container are then stored by Prometheus.
-Kepler finds which container a process belongs to by using the Process ID (PID) information collected in the BPF program, and then using the container ID, we can correlate it to the pods’ name.
+Kepler finds which container a process belongs to by using the Process ID (PID) information collected in the BPF program, and then using the container ID, we can correlate it to the pods' name.
More specifically, the container ID comes from `/proc/PID/cgroup`, and Kepler uses the Kubernetes APIServer to keep an updated list of pods that are created and removed from the node.
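+
+For example, the container ID is visible directly in a process's cgroup file (illustrative
+output for a CRI-O node with the systemd cgroup driver):
+
+```sh
+cat /proc/self/cgroup
+# 0::/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod<uid>.slice/crio-<container-id>.scope
+```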
-The Process IDs that do not correlate with a Kubernetes container are classified as “system processes” (including PID 0).
+The Process IDs that do not correlate with a Kubernetes container are classified as `system processes` (including PID 0).
***In the future, processes that run VMs will be associated with VM IDs so that Kepler can also export VM metrics.***
### Pre-trained Power Model Limitations
-It’s important to note that pre-trained power models have their limitations when compared to power models using real-time power metrics.
+It's important to note that pre-trained power models have their limitations when compared to power models using real-time power metrics.
- **System-Specific Models:** Pre-trained power models are system-specific and vary based on CPU models and architectures.
While not perfect, generic models can offer insights into application power consumption, aiding energy-efficient decisions.
diff --git a/docs/usage/general_config.md b/docs/usage/general_config.md
index 50c208ba..7f1b5dc4 100644
--- a/docs/usage/general_config.md
+++ b/docs/usage/general_config.md
@@ -1,58 +1,59 @@
# Configuration
+
This is a list of configurable values of the Kepler system. The configuration can also be applied by defining the following CR spec if the Kepler Operator is installed.
|Point of Configuration|Spec|Description|Default|
----|---|---|---|
-***Kepler CR*** (single item: default)|||
-Kepler DaemonSet Deployment|daemon.exporter.image|Kepler main image|quay.io/sustainable_computing_io/kepler:latest|
-Kepler DaemonSet Deployment|daemon.exporter.port|Metric exporter port|9102|
-Kepler DaemonSet Deployment|daemon.estimator-sidecar.enabled|[Kepler Estimator Sidecar](./../kepler_model_server/get_started.md#step-3-learn-how-to-use-the-power-model) patch|false|
-Kepler DaemonSet Deployment|daemon.estimator-sidecar.image|Kepler estimator sidecar image
-Kepler DaemonSet Deployment|daemon.estimator-sidecar.mnt-path|Mount path between main container and the sidecar for unix domain socket|/tmp
-Kepler DaemonSet Environment (METRIC_PATH)|daemon.exporter.path|Path to export metrics|/metrics
-Kepler DaemonSet Environment (MODEL_SERVER_ENABLE)|model-server.enaled|[Kepler Model Server Pod](./../kepler_model_server/get_started.md#step-2-learn-how-to-obtain-power-model) connection|false
-*[*model-server.enaled*]*|
-Model Server Pod Pod Environment (MODEL_SERVER_PORT)|model-server.port|Model serving port of model server|8100
-Model Server Pod Pod Environment (PROM_SERVER)|model-server.prom|Endpoint to Prometheus metric server |http://prometheus-k8s.monitoring.svc.cluster.local:9090
-Model Server Pod Pod Environment (MODEL_PATH)|model-server.model-path|Path to keep models|models
-Kepler DaemonSet Environment (MODEL_SERVER_ENDPOINT)|daemon.model-server|Endpoint to server container of model server|http://kepler-model-server.monitoring.cluster.local:[*model-server.port*]/model
-Model Server Pod Deployment|model-server.trainer|Model online trainer patch|false
-*[*model-server.trainer*]*|
-Model Server Pod Environment (PROM_QUERY_INTERVAL)|model-server.prom_interval|Interval to execute trainning pipelines in seconds|20
-Model Server Pod Environment (PROM_QUERY_STEP)|model-server.prom-step|Step of query data point in trainning pipelines in seconds|3
-Model Server Pod Environment (PROM_HEADERS)|model-server.prom-header|For specify required header (such as authentication)|-
-Model Server Pod Environment (PROM_SSL_DISABLE)|model-server.prom-ssl|Disable ssl in Prometheus connection|true
-Model Server Pod Environment (INITIAL_MODELS_LOC)|model-server.init-loc|Root URL of offline models to use as initial models|https://raw.githubusercontent.com/sustainable-computing-io/kepler-model-server/main/tests/test_models
-Model Server Pod Environment (INITIAL_MODEL_NAMES.[MODEL_TYPE])|model-server.[MODEL_TYPE]|Name of default pipeline for each model type|-
-***CollectMetric CR*** (single item: default)|||
-Kepler DaemonSet Environment (COUNTER_METRICS)|counter|List of performance metrics to enable from counter source| * (enable all available metrics from counter source)
-Kepler DaemonSet Environment (CGROUP_METRICS)|cgroup|List of performance metrics to enable from cgroup source| * (enable all available metrics from cgroup source)
-Kepler DaemonSet Environment (BPF_METRICS)|bpf|List of performance metrics to enable from bpf (aka. ebpf) source| * (enable all available metrics from bpf source)
-Kepler DaemonSet Environment (GPU_METRICS)|gpu|List of performance metrics to enable from gpu source| * (enable all available metrics from gpu source)
-***ExportMetric CR*** (single item: default)|||
-Kepler DaemonSet Environment (PERF_METRICS)|perf|List of performance metrics to export | * (enable all collected performance metrics)
-Kepler DaemonSet Environment (EXPORT_NODE_TOTAL_POWER)|node_total_power|Toggle whether to export node total power| true
-Kepler DaemonSet Environment (EXPORT_NODE_COMPONENT_POWERS)|node_component_powers|Toggle whether to export node powers by components| true
-Kepler DaemonSet Environment (EXPORT_POD_TOTAL_POWER)|pod_total_power|Toggle whether to export pod total power| true
-Kepler DaemonSet Environment (EXPORT_POD_COMPONENT_POWERS)|pod_component_powers|Toggle whether to export pod powers by components| true
-***EstimatorConfig CR*** (multiple items: node-total-power, node-component-powers, pod-total-power, pod-component-powers)|||
-Kepler DaemonSet Environment (MODEL_CONFIG.[MODEL_ITEM]_ESTIMATOR)|use-sidecar|Toggle whether to use estimator sidecar for power estimation|false
-Kepler DaemonSet Environment (MODEL_CONFIG.[MODEL_ITEM]_MODEL)|fixed-model|Specify model name| (auto-selected)
-Kepler DaemonSet Environment (MODEL_CONFIG.[MODEL_ITEM]_FILTERS)|filters|Specify model filter conditions in string| (auto-selected)
-Kepler DaemonSet Environment (MODEL_CONFIG.[MODEL_ITEM]_INIT_URL)|init-url|URL to initial model location| -
-***RatioConfig CR*** (single items: default)|||
-Kepler DaemonSet Environment (CORE_USAGE_METRIC)|core_metric|Specify metric for compute core (mostly cpu) component of pod power by ratio modeling|cpu_instr
-Kepler DaemonSet Environment (DRAM_USAGE_METRIC)|dram_metric|Specify metric for computing dram component of pod power by ratio modeling|cache_miss
-Kepler DaemonSet Environment (UNCORE_USAGE_METRIC)|uncore_metric|Specify metric for computing uncore component of pod power by ratio modeling|(evenly divided)
-Kepler DaemonSet Environment (GENERAL_USAGE_METRIC)|general_metric|Specify metric for computing uncategorized component (pkg-core-uncore) of pod power by ratio modeling|cpu_instr
-Kepler DaemonSet Environment (GPU_USAGE_METRIC)|core_metric|Specify metric for computing gpu component of pod power by ratio modeling|-
+|---|---|---|---|
+|***Kepler CR*** (single item: default)||||
+|Kepler DaemonSet Deployment|daemon.exporter.image|Kepler main image|quay.io/sustainable_computing_io/kepler:latest|
+|Kepler DaemonSet Deployment|daemon.exporter.port|Metric exporter port|9102|
+|Kepler DaemonSet Deployment|daemon.estimator-sidecar.enabled|[Kepler Estimator Sidecar](./../kepler_model_server/get_started.md#step-3-learn-how-to-use-the-power-model) patch|false|
+|Kepler DaemonSet Deployment|daemon.estimator-sidecar.image|Kepler estimator sidecar image|-|
+|Kepler DaemonSet Deployment|daemon.estimator-sidecar.mnt-path|Mount path between main container and the sidecar for unix domain socket|/tmp|
+|Kepler DaemonSet Environment (METRIC_PATH)|daemon.exporter.path|Path to export metrics|/metrics|
+|Kepler DaemonSet Environment (MODEL_SERVER_ENABLE)|model-server.enabled|[Kepler Model Server Pod](./../kepler_model_server/get_started.md#step-2-learn-how-to-obtain-power-model) connection|false|
+|**model-server.enabled**||||
+|Model Server Pod Environment (MODEL_SERVER_PORT)|model-server.port|Model serving port of model server|8100|
+|Model Server Pod Environment (PROM_SERVER)|model-server.prom|Endpoint to Prometheus metric server|`http://prometheus-k8s.monitoring.svc.cluster.local:9090`|
+|Model Server Pod Environment (MODEL_PATH)|model-server.model-path|Path to keep models|models|
+|Kepler DaemonSet Environment (MODEL_SERVER_ENDPOINT)|daemon.model-server|Endpoint to server container of model server|`http://kepler-model-server.monitoring.cluster.local:[model-server.port]/model`|
+|Model Server Pod Deployment|model-server.trainer|Model online trainer patch|false|
+|**model-server.trainer**||||
+|Model Server Pod Environment (PROM_QUERY_INTERVAL)|model-server.prom_interval|Interval to execute training pipelines in seconds|20|
+|Model Server Pod Environment (PROM_QUERY_STEP)|model-server.prom-step|Step of query data point in training pipelines in seconds|3|
+|Model Server Pod Environment (PROM_HEADERS)|model-server.prom-header|Specify required headers (such as authentication)|-|
+|Model Server Pod Environment (PROM_SSL_DISABLE)|model-server.prom-ssl|Disable ssl in Prometheus connection|true|
+|Model Server Pod Environment (INITIAL_MODELS_LOC)|model-server.init-loc|Root URL of offline models to use as initial models|`https://raw.githubusercontent.com/sustainable-computing-io/kepler-model-server/main/tests/test_models`|
+|Model Server Pod Environment (INITIAL_MODEL_NAMES.[`MODEL_TYPE`])|model-server.[`MODEL_TYPE`]|Name of default pipeline for each model type|-|
+|***CollectMetric CR*** (single item: default)||||
+|Kepler DaemonSet Environment (COUNTER_METRICS)|counter|List of performance metrics to enable from counter source| * (enable all available metrics from counter source)|
+|Kepler DaemonSet Environment (CGROUP_METRICS)|cgroup|List of performance metrics to enable from cgroup source| * (enable all available metrics from cgroup source)|
+|Kepler DaemonSet Environment (BPF_METRICS)|bpf|List of performance metrics to enable from bpf (aka. eBPF) source| * (enable all available metrics from bpf source)|
+|Kepler DaemonSet Environment (GPU_METRICS)|gpu|List of performance metrics to enable from gpu source| * (enable all available metrics from gpu source)|
+|***ExportMetric CR*** (single item: default)||||
+|Kepler DaemonSet Environment (PERF_METRICS)|perf|List of performance metrics to export | * (enable all collected performance metrics)|
+|Kepler DaemonSet Environment (EXPORT_NODE_TOTAL_POWER)|node_total_power|Toggle whether to export node total power| true|
+|Kepler DaemonSet Environment (EXPORT_NODE_COMPONENT_POWERS)|node_component_powers|Toggle whether to export node powers by components| true|
+|Kepler DaemonSet Environment (EXPORT_POD_TOTAL_POWER)|pod_total_power|Toggle whether to export pod total power| true|
+|Kepler DaemonSet Environment (EXPORT_POD_COMPONENT_POWERS)|pod_component_powers|Toggle whether to export pod powers by components| true|
+|***EstimatorConfig CR*** (multiple items: node-total-power, node-component-powers, pod-total-power, pod-component-powers)||||
+|Kepler DaemonSet Environment (MODEL_CONFIG.[`MODEL_ITEM`]_ESTIMATOR)|use-sidecar|Toggle whether to use estimator sidecar for power estimation|false|
+|Kepler DaemonSet Environment (MODEL_CONFIG.[`MODEL_ITEM`]_MODEL)|fixed-model|Specify model name| (auto-selected)|
+|Kepler DaemonSet Environment (MODEL_CONFIG.[`MODEL_ITEM`]_FILTERS)|filters|Specify model filter conditions in string| (auto-selected)|
+|Kepler DaemonSet Environment (MODEL_CONFIG.[`MODEL_ITEM`]_INIT_URL)|init-url|URL to initial model location| -|
+|***RatioConfig CR*** (single item: default)||||
+|Kepler DaemonSet Environment (CORE_USAGE_METRIC)|core_metric|Specify metric for compute core (mostly cpu) component of pod power by ratio modeling|cpu_instr|
+|Kepler DaemonSet Environment (DRAM_USAGE_METRIC)|dram_metric|Specify metric for computing dram component of pod power by ratio modeling|cache_miss|
+|Kepler DaemonSet Environment (UNCORE_USAGE_METRIC)|uncore_metric|Specify metric for computing uncore component of pod power by ratio modeling|(evenly divided)|
+|Kepler DaemonSet Environment (GENERAL_USAGE_METRIC)|general_metric|Specify metric for computing uncategorized component (pkg-core-uncore) of pod power by ratio modeling|cpu_instr|
+|Kepler DaemonSet Environment (GPU_USAGE_METRIC)|core_metric|Specify metric for computing gpu component of pod power by ratio modeling|-|
**Remarks:**
-- [MODEL_ITEM] can be either of the following values corresponding to item names: *NODE_TOTAL*, *NODE_COMPONENT*, *POD_TOTAL*, *POD_COMPONENTS*.
-- [MODEL_TYPE] is a concatenation of [MODEL_ITEM] and [OUTPUT_FORMAT] which can be either *POWER* for archived model or *MODEL_WEIGHT* for weights in json.
+- [`MODEL_ITEM`] can be any of the following values, corresponding to the item names: `NODE_TOTAL`, `NODE_COMPONENT`, `POD_TOTAL`, `POD_COMPONENTS`.
+- [`MODEL_TYPE`] is a concatenation of [`MODEL_ITEM`] and [`OUTPUT_FORMAT`], which can be either `POWER` for an archived model or `MODEL_WEIGHT` for model weights in JSON.
- For example,
-
- * *NODE_TOTAL_POWER*: archived model to estimate node total power used by estimator sidecar
- * *POD_COMPONENTS_MODEL_WEIGHT*: model weight to estimate pod component powers used by linear regressor embedded in Kepler main component.
+
+ For example:
+
+ - `NODE_TOTAL_POWER`: an archived model used by the estimator sidecar to estimate node total power.
+ - `POD_COMPONENTS_MODEL_WEIGHT`: model weights used by the linear regressor embedded in the Kepler main component to estimate pod component powers.
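+
+ As an illustration of how these names compose (the URL is a placeholder, not a published model): an *EstimatorConfig* item `node-total-power` with `use-sidecar` enabled and an `init-url` set would surface in the Kepler DaemonSet environment as:
+
+ ```yaml
+ MODEL_CONFIG: |
+   NODE_TOTAL_ESTIMATOR=true
+   NODE_TOTAL_INIT_URL=https://example.com/models/NodeTotalPower.zip
+ ```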
diff --git a/docs/usage/kepler_daemon.md b/docs/usage/kepler_daemon.md
index e54d4204..9327002c 100644
--- a/docs/usage/kepler_daemon.md
+++ b/docs/usage/kepler_daemon.md
@@ -1,40 +1,42 @@
# Kepler DaemonSet Customization
+
-Kepler enables a function to hybrid read environment variable from attributes directly (container.env) and from the ConfigMap. Note that, all steps will be operated by [Kepler Operator](https://github.com/sustainable-computing-io/kepler-operator) if the operator is installed.
-To set environments by ConfigMap:
-1. Create/Generate ConfigMap
+Kepler can read environment variables both directly from container attributes (`container.env`) and from a ConfigMap. Note that all of the following steps are handled by the [Kepler Operator](https://github.com/sustainable-computing-io/kepler-operator) if the operator is installed.
+To set environment variables via a ConfigMap:
+1. Create or generate the ConfigMap:
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: kepler-cfm
- namespace: kepler-system
-data:
- MODEL_SERVER_ENABLE: true
- COUNTER_METRICS: '*'
- CGROUP_METRICS: '*'
- BPF_METRICS: '*'
- # KUBELET_METRICS: ''
- # GPU_METRICS: ''
- PERF_METRICS: '*'
- MODEL_CONFIG: |
- POD_COMPONENT_ESTIMATOR=true
- POD_COMPONENT_INIT_URL=https://raw.githubusercontent.com/sustainable-computing-io/kepler-model-server/main/tests/test_models/DynComponentPower/CgroupOnly/ScikitMixed.zip
-```
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: kepler-cfm
+ namespace: kepler-system
+ data:
+ MODEL_SERVER_ENABLE: "true"
+ COUNTER_METRICS: '*'
+ CGROUP_METRICS: '*'
+ BPF_METRICS: '*'
+ # KUBELET_METRICS: ''
+ # GPU_METRICS: ''
+ PERF_METRICS: '*'
+ MODEL_CONFIG: |
+ POD_COMPONENT_ESTIMATOR=true
+ POD_COMPONENT_INIT_URL=https://raw.githubusercontent.com/sustainable-computing-io/kepler-model-server/main/tests/test_models/DynComponentPower/CgroupOnly/ScikitMixed.zip
+ ```
-2. Mount the ConfigMap to DaemonSet:
+2. Mount the ConfigMap to the DaemonSet:
-```yaml
- spec:
- containers:
- - name: kepler-exporter
- volumeMounts:
- - name: cfm
- mountPath: /etc/config
- readOnly: true
- volumes:
- - name: cfm
- configMap:
- name: kepler-cfm
-```
\ No newline at end of file
+
+ ```yaml
+ spec:
+ containers:
+ - name: kepler-exporter
+ volumeMounts:
+ - name: cfm
+ mountPath: /etc/config
+ readOnly: true
+ volumes:
+ - name: cfm
+ configMap:
+ name: kepler-cfm
+ ```
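+
+Once the ConfigMap is mounted, the exporter pods typically need a restart to pick up the new
+values. A minimal sketch (assuming the manifests above are saved as `kepler-cfm.yaml` and the
+DaemonSet runs in the `kepler-system` namespace):
+
+```bash
+kubectl apply -f kepler-cfm.yaml
+kubectl rollout restart daemonset/kepler-exporter -n kepler-system
+```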
diff --git a/docs/usage/trouble_shooting.md b/docs/usage/trouble_shooting.md
index a938fe10..bc6d727d 100644
--- a/docs/usage/trouble_shooting.md
+++ b/docs/usage/trouble_shooting.md
@@ -1,11 +1,16 @@
-# Trouble Shooting
+# Troubleshooting
## Kepler Pod failed to start
+
### Background
-Kepler uses eBPF to obtain performance counter readings and processes stats. Since eBPF requires kernel headers, Kepler will fail to start up when the kernel headers are missing.
+
+Kepler uses eBPF to obtain performance counter readings and process stats. Since eBPF requires kernel
+headers, Kepler will fail to start up when the kernel headers are missing.
### Diagnose
-To confirm, check the Kepler Pod logs with the following command and look for message `not able to load eBPF modules`.
+
+To confirm, check the Kepler Pod logs with the following command and look for the message
+`not able to load eBPF modules`:
```bash
kubectl logs -n kepler daemonset/kepler-exporter
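+
+# If that message appears, also verify that kernel headers are installed on the
+# node; the exact path below is distribution-dependent (an illustrative check):
+ls /lib/modules/$(uname -r)/build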
@@ -26,10 +31,15 @@ On OpenShift, install the MachineConfiguration [here](https://github.com/sustain
## Kepler energy metrics are zeroes
+
### Background
-Kepler uses RAPL counters on x86 platforms to read energy consumption.
-VMs do not have RAPL counters and thus Kepler estimates energy consumption based on the pre-trained ML models. The models use either hardware performance counters or cGroup stats to estimate energy consumed by processes. Currently the cGroup based models use cGroup v2 features such as `cgroupfs_cpu_usage_us`, `cgroupfs_memory_usage_bytes`, `cgroupfs_system_cpu_usage_us`, `cgroupfs_user_cpu_usage_us`, `bytes_read`, and `bytes_writes`.
+Kepler uses RAPL counters on x86 platforms to read energy consumption.
+VMs do not have RAPL counters, so Kepler estimates energy consumption based on pre-trained
+ML models. The models use either hardware performance counters or cGroup stats to estimate the
+energy consumed by processes. Currently, the cGroup-based models use cGroup v2 features such as
+`cgroupfs_cpu_usage_us`, `cgroupfs_memory_usage_bytes`, `cgroupfs_system_cpu_usage_us`,
+`cgroupfs_user_cpu_usage_us`, `bytes_read`, and `bytes_writes`.
### Diagnose
@@ -40,7 +50,8 @@ ls /sys/fs/cgroup/cgroup.controllers
```
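+
+The same check can be scripted; a small sketch (the exact controller list varies by kernel and
+distribution):
+
+```bash
+# cgroup v2 exposes the unified cgroup.controllers file; its absence implies cgroup v1
+if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
+    echo "cgroup v2 enabled: $(cat /sys/fs/cgroup/cgroup.controllers)"
+else
+    echo "cgroup v2 is not enabled on this node"
+fi
+```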
### Solution
+
-Enable cGroup v2 on the node by following [these Kubernetes instruction](https://kubernetes.io/docs/concepts/architecture/cgroups/).
+Enable cGroup v2 on the node by following [these Kubernetes instructions](https://kubernetes.io/docs/concepts/architecture/cgroups/).
-On OpenShift, apply [these cGroup v2 MachineConfiguration](https://github.com/sustainable-computing-io/kepler/tree/main/manifests/config/cluster-prereqs)
\ No newline at end of file
+On OpenShift, apply [these cGroup v2 MachineConfigurations](https://github.com/sustainable-computing-io/kepler/tree/main/manifests/config/cluster-prereqs).
diff --git a/index.md b/index.md
index f9c11cf4..69455a0c 100644
--- a/index.md
+++ b/index.md
@@ -3,4 +3,3 @@
Kepler (Kubernetes-based Efficient Power Level Exporter) is a Prometheus exporter. It uses eBPF to probe CPU performance counters and Linux kernel tracepoints. These data and stats from cgroup and sysfs are fed into ML models to estimate energy consumption by Pods.
Check us out on GitHub ➡️ [Kepler](https://github.com/sustainable-computing-io/kepler).
-
diff --git a/templates/adopters.md b/templates/adopters.md
index 1fa49021..95902eac 100644
--- a/templates/adopters.md
+++ b/templates/adopters.md
@@ -5,9 +5,9 @@ description: >
On this page you can see a selection of organisations who self-identified as using Kepler.
---
-# Kepler Adopters
+## Kepler Adopters
-Organisations below all are using Kepler.
+The organizations below are all using Kepler.
To join this list, please follow [these instructions](https://sustainable-computing.io/project/contributing/).
@@ -18,4 +18,4 @@ To join this list, please follow [these instructions](https://sustainable-comput
{{ end }}
[{{.name}}]({{.url}})
-{{ end }}
\ No newline at end of file
+{{ end }}