This document covers the installation and configuration of containerd
and Kata Containers. containerd provides not only the ctr
command line tool, but also the CRI
interface for Kubernetes and other CRI clients.
This document is primarily written for Kata Containers v1.5.0-rc2 or above, and containerd v1.2.0 or above. Previous versions are addressed here, but we suggest users upgrade to the newer versions for better support.
RuntimeClass
is a Kubernetes feature first
introduced in Kubernetes 1.12 as alpha. It allows users to select the container runtime configuration to
use to run a Pod's containers. This feature has been supported in containerd
since v1.2.0.
Before RuntimeClass
was introduced, Kubernetes was not aware of the differences between runtimes on a node. The kubelet
creates Pod sandboxes and containers through CRI implementations and treats all Pods equally. However, there
are requirements to run trusted Pods (e.g. Kubernetes system plugins) in a native container like runc, and to run untrusted
workloads in isolated sandboxes (e.g. Kata Containers).
As a result, the CRI implementations extended their semantics to meet these requirements:
- At the beginning, Frakti checked the network configuration of a Pod: a Pod with host networking was treated as trusted, while others were treated as untrusted.
- containerd introduced an annotation for untrusted Pods since v1.0:

      annotations:
        io.kubernetes.cri.untrusted-workload: "true"

- Similarly, CRI-O introduced the annotation io.kubernetes.cri-o.TrustedSandbox for untrusted Pods.
To eliminate the complexity of user configuration introduced by these non-standardized annotations and to provide
extensibility, RuntimeClass
was introduced. It gives users the ability to affect the runtime behavior
through RuntimeClass
without knowledge of the CRI daemons. We suggest that users with multiple runtimes
use RuntimeClass
instead of the deprecated annotations.
The containerd-shim-kata-v2
(shimv2 for short in this documentation)
implements the Containerd Runtime V2 (Shim API) for Kata.
With shimv2, Kubernetes can launch Pods and OCI-compatible containers with one shim per Pod, and no standalone kata-proxy
process is needed, even when VSOCK is not available. Prior to shimv2, 2N+1
shims were used (i.e. a containerd-shim
and a kata-shim
for each container and the Pod sandbox itself), plus a kata-proxy
process when VSOCK was not available.
Shim v2 was introduced in containerd v1.2.0, and the Kata shimv2
was implemented in Kata Containers v1.5.0.
Follow the instructions to install Kata Containers.
Note:
cri
is a native plugin of containerd 1.1 and above. It is built into containerd and enabled by default. You do not need to install cri
separately if you have containerd 1.1 or above. Just remove the cri
plugin from the list of disabled_plugins
in the containerd configuration file (/etc/containerd/config.toml
).
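For example, the relevant line in /etc/containerd/config.toml can simply keep the list empty so that the cri plugin stays enabled (a minimal sketch; your file will usually contain other settings as well):

```toml
# Keep the list empty so no built-in plugins, including cri, are disabled.
disabled_plugins = []
```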
Follow the instructions from the CRI installation guide.
Then, check if containerd
is now available:
$ command -v containerd
If you have installed Kubernetes with
kubeadm
, you might have already installed the CNI plugins.
You can manually install CNI plugins as follows:
$ git clone https://github.com/containernetworking/plugins.git
$ pushd plugins
$ ./build_linux.sh
$ sudo mkdir -p /opt/cni
$ sudo cp -r bin /opt/cni/
$ popd
Note:
cri-tools
is a set of tools for CRI used for development and testing. Users who only want to use containerd with Kubernetes can skip cri-tools
.
You can install the cri-tools
from source code:
$ git clone https://github.com/kubernetes-sigs/cri-tools.git
$ pushd cri-tools
$ make
$ sudo -E make install
$ popd
By default, the configuration of containerd is located at /etc/containerd/config.toml
, and the
cri
plugin is configured in the following section:
[plugins]
[plugins.cri]
[plugins.cri.containerd]
[plugins.cri.containerd.default_runtime]
#runtime_type = "io.containerd.runtime.v1.linux"
[plugins.cri.cni]
# conf_dir is the directory in which the admin places a CNI conf.
conf_dir = "/etc/cni/net.d"
The following sections outline how to add Kata Containers to the configuration.
For the following versions, the RuntimeClass-based configuration is suggested:
- Kata Containers v1.5.0 or above (including 1.5.0-rc)
- containerd v1.2.0 or above
- Kubernetes v1.12.0 or above
The following configuration includes two runtime classes:
- plugins.cri.containerd.runtimes.runc: the runc runtime, which is the default runtime.
- plugins.cri.containerd.runtimes.kata: the Kata runtime, for which containerd (see the containerd documentation) translates the dot-connected string io.containerd.kata.v2 into containerd-shim-kata-v2 (i.e. the binary name of the Kata implementation of the Containerd Runtime V2 (Shim API)).
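The naming convention can be illustrated with a small shell sketch: containerd derives the shim binary name from the last two dot-separated components of the runtime_type string. This is an illustration of the convention only, not containerd's actual code:

```shell
# Derive the shim binary name from a runtime_type string the way
# containerd's runtime v2 loader does: keep the last two dot-separated
# components and prefix them with "containerd-shim-".
runtime_type="io.containerd.kata.v2"
shim_binary="containerd-shim-$(echo "${runtime_type}" | awk -F. '{print $(NF-1) "-" $NF}')"
echo "${shim_binary}"   # containerd-shim-kata-v2
```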
[plugins.cri.containerd]
  no_pivot = false
  [plugins.cri.containerd.runtimes]
    [plugins.cri.containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      privileged_without_host_devices = false
      [plugins.cri.containerd.runtimes.runc.options]
        BinaryName = ""
        CriuImagePath = ""
        CriuPath = ""
        CriuWorkPath = ""
        IoGid = 0
    [plugins.cri.containerd.runtimes.kata]
      runtime_type = "io.containerd.kata.v2"
      privileged_without_host_devices = true
      pod_annotations = ["io.katacontainers.*"]
      container_annotations = ["io.katacontainers.*"]
      [plugins.cri.containerd.runtimes.kata.options]
        ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration.toml"
- privileged_without_host_devices tells containerd that a privileged Kata container should not have direct access to all host devices. If unset, containerd passes all host devices to the Kata container, which may cause security issues.
- pod_annotations is the list of pod annotations passed to both the pod sandbox and the containers through the OCI config.
- container_annotations is the list of container annotations passed through to the OCI config of the containers.
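As an example, with pod_annotations configured as above, a Pod can pass a Kata-specific annotation such as io.katacontainers.config.hypervisor.default_memory down to the runtime. The annotation name comes from the Kata annotations documentation; the Pod name and value here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-annotated-pod   # illustrative name
  annotations:
    # Ask Kata for 512 MiB of default VM memory (example value)
    io.katacontainers.config.hypervisor.default_memory: "512"
spec:
  runtimeClassName: kata
  containers:
  - name: box
    image: docker.io/library/busybox:latest
    command: ["sleep", "3600"]
```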
The ConfigPath
option is optional. If you do not specify it, shimv2 first tries to get the configuration file path from the environment variable KATA_CONF_FILE
. If that is not set either, shimv2 falls back to the default Kata configuration file paths (/etc/kata-containers/configuration.toml
and /usr/share/defaults/kata-containers/configuration.toml
, in that order).
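The lookup order above can be sketched as a small shell function. This is an illustration only; the real logic lives inside shimv2, and only the first default path is modeled here:

```shell
# Sketch of shimv2's configuration lookup order (illustrative, not Kata code).
resolve_kata_config() {
  config_path="$1"                     # ConfigPath from the containerd options
  if [ -n "${config_path}" ]; then
    echo "${config_path}"
  elif [ -n "${KATA_CONF_FILE:-}" ]; then
    echo "${KATA_CONF_FILE}"           # environment variable fallback
  else
    echo "/etc/kata-containers/configuration.toml"  # first default path
  fi
}
```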
For cases without RuntimeClass
support, you can use the legacy annotation method to run untrusted workloads with Kata Containers.
With the following configuration, you can run trusted workloads with a runtime such as runc
, and untrusted workloads with Kata Containers:
[plugins.cri.containerd]
# "plugins.cri.containerd.default_runtime" is the runtime to use in containerd.
[plugins.cri.containerd.default_runtime]
# runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
runtime_type = "io.containerd.runtime.v1.linux"
# "plugins.cri.containerd.untrusted_workload_runtime" is the runtime used to run untrusted workloads.
[plugins.cri.containerd.untrusted_workload_runtime]
# runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
runtime_type = "io.containerd.kata.v2"
You can find more information in the containerd config documentation.
If you want to set Kata Containers as the only runtime in the deployment, you can simply configure as follows:
[plugins.cri.containerd]
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.kata.v2"
Note: If you skipped the Install cri-tools
section, you can skip this section too.
First, add the CNI configuration to the containerd configuration.
The following configuration assumes you installed the CNI plugins as outlined in the Install CNI plugins section.
Put the CNI configuration in /etc/cni/net.d/10-mynet.conf
:
{
"cniVersion": "0.2.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "172.19.0.0/24",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
Next, reference the configuration directory in the containerd config.toml
:
[plugins.cri.cni]
# conf_dir is the directory in which the admin places a CNI conf.
conf_dir = "/etc/cni/net.d"
The configuration file of the crictl
command line tool in cri-tools
is located at /etc/crictl.yaml
:
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
To run a container with Kata Containers through the containerd command line, you can run the following:
$ sudo ctr image pull docker.io/library/busybox:latest
$ sudo ctr run --cni --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/busybox:latest hello sh
This launches a BusyBox container named hello
, which is removed (--rm
) after it exits.
The --cni
flag enables CNI networking for the container. Without this flag, the container is created with only a
loopback interface.
Use the following script to create the rootfs:
ctr i pull quay.io/prometheus/busybox:latest
ctr i export rootfs.tar quay.io/prometheus/busybox:latest
rootfs_tar=rootfs.tar
bundle_dir="./bundle"
mkdir -p "${bundle_dir}"
# extract busybox rootfs
rootfs_dir="${bundle_dir}/rootfs"
mkdir -p "${rootfs_dir}"
layers_dir="$(mktemp -d)"
tar -C "${layers_dir}" -pxf "${rootfs_tar}"
layer_count="$(jq -r '.[].Layers | length' "${layers_dir}/manifest.json")"
for ((i=0; i<layer_count; i++)); do
  layer="$(jq -r ".[].Layers[${i}]" "${layers_dir}/manifest.json")"
  tar -C "${rootfs_dir}" -xf "${layers_dir}/${layer}"
done
Use runc spec to generate config.json
cd ./bundle/rootfs
runc spec
mv config.json ../
Change the root path
in config.json
to the absolute path of the rootfs:
"root":{
"path":"/root/test/bundle/rootfs",
"readonly": false
},
sudo ctr run -d --runtime io.containerd.run.kata.v2 --config bundle/config.json hello
sudo ctr t exec --exec-id ${ID} -t hello sh
With the crictl
command line of cri-tools
, you can specify the runtime class with the -r
or --runtime
flag.
Use the following to launch a Pod with the kata
runtime class, using the example pod config from cri-tools
:
$ sudo crictl runp -r kata podsandbox-config.yaml
36e23521e8f89fabd9044924c9aeb34890c60e85e1748e8daca7e2e673f8653e
You can add a container to the launched Pod with the following:
$ sudo crictl create 36e23521e8f89 container-config.yaml podsandbox-config.yaml
1aab7585530e62c446734f12f6899f095ce53422dafcf5a80055ba11b95f2da7
Now, start it with the following:
$ sudo crictl start 1aab7585530e6
1aab7585530e6
In Kubernetes, you need to create a RuntimeClass
resource and set the runtimeClassName
field in the Pod spec
(see this document for more information).
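For example, a RuntimeClass resource matching the kata runtime configured earlier, and a Pod that selects it, might look like this (resource and Pod names are illustrative; use apiVersion node.k8s.io/v1 on newer Kubernetes releases):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata                      # must match the runtime name in config.toml
---
apiVersion: v1
kind: Pod
metadata:
  name: kata-example-pod           # illustrative name
spec:
  runtimeClassName: kata
  containers:
  - name: box
    image: docker.io/library/busybox:latest
```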
If RuntimeClass
is not supported, you can use the following annotation in a Kubernetes pod to mark it as an untrusted workload:
annotations:
io.kubernetes.cri.untrusted-workload: "true"
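A complete Pod spec carrying this annotation might look like the following (the Pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-example          # illustrative name
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: box
    image: docker.io/library/busybox:latest
```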