Merge pull request #1551 from aimeeu/get-started
Thomas Kosiewski authored Feb 29, 2024
2 parents de69d08 + 42d4688 commit a3c7085
Showing 8 changed files with 271 additions and 178 deletions.
2 changes: 2 additions & 0 deletions docs/README.md
@@ -9,6 +9,8 @@ Execute these commands from the `/docs/` directory.

### Installation

From the `vcluster/docs` directory, execute:

```
$ yarn
```
58 changes: 0 additions & 58 deletions docs/pages/fragments/deploy-vcluster.mdx

This file was deleted.

11 changes: 3 additions & 8 deletions docs/pages/fragments/install/cli.mdx
@@ -10,7 +10,6 @@ import TabItem from "@theme/TabItem";
{ label: 'Mac (Silicon/ARM)', value: 'mac-arm', },
{ label: 'Linux (AMD)', value: 'linux', },
{ label: 'Linux (ARM)', value: 'linux-arm', },
{ label: 'Homebrew Installation', value: 'homebrew', },
{ label: 'Windows Powershell', value: 'windows', },
]
}>
@@ -20,6 +19,9 @@ import TabItem from "@theme/TabItem";
brew install loft-sh/tap/vcluster
```

If you previously installed the CLI with `brew install vcluster`, run `brew uninstall vcluster` and then install it from the tap.
The binaries in the tap are signed using the [Sigstore](https://docs.sigstore.dev/) framework for enhanced security.

</TabItem>
<TabItem value="mac">

@@ -48,13 +50,6 @@ curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/downloa
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
```

</TabItem>
<TabItem value="homebrew">

```bash
brew install vcluster
```

</TabItem>
<TabItem value="windows">

4 changes: 2 additions & 2 deletions docs/pages/getting-started/cleanup.mdx
@@ -1,5 +1,5 @@
---
title: Delete vClusters
title: Delete vCluster
sidebar_label: 4. Cleanup
---

@@ -8,5 +8,5 @@ import DeleteFragment from '../fragments/delete-vcluster.mdx'
<DeleteFragment/>

:::caution Resources inside vClusters
Deleting a vCluster will also delete all objects within and all states related to the vCluster.
Deleting a vCluster also deletes all objects within it and all state related to the vCluster.
:::
190 changes: 105 additions & 85 deletions docs/pages/getting-started/connect.mdx
@@ -1,117 +1,137 @@
---
title: Connect to and use vCluster
title: Connect to and Use vCluster
sidebar_label: 3. Use vCluster
---

Now that we have deployed a vCluster, let's connect to it, run a couple of `kubectl` commands inside of it, and then understand what happens behind the scenes in our vCluster's host namespace, which is part of the underlying host cluster.
## Learning objectives

## Connection to the vCluster
1. [Connect to your vCluster instance](#connect-to-your-vcluster).
1. [Run some `kubectl` commands inside of it](#run-kubectl-commands).
1. [Learn what happens](#what-happens-in-the-host-cluster) behind the scenes inside your vCluster's host namespace, which is part of the underlying host cluster.

By default, vCluster CLI will connect to the virtual cluster either directly (on local Kubernetes distributions) or via port-forwarding for remote clusters.
## Connect to your vCluster

If you want to use vCluster without port-forwarding, you can take a look at [other supported exposing methods](../using-vclusters/access.mdx).
To connect to your cluster, run `vcluster connect my-cluster`. Output is similar to:

```bash
done Switched active kube context to vcluster_my-cluster
- Use `vcluster disconnect` to return to your previous kube context
```

By default, the vCluster CLI connects to the virtual cluster either directly (on local Kubernetes distributions) or via port-forwarding for remote clusters. If you want to use vCluster on remote clusters without port-forwarding, you can take a look at [other supported exposing methods](../using-vclusters/access.mdx).

## Run kubectl commands

A virtual cluster behaves the same way as a regular Kubernetes cluster. That means you can run any `kubectl` command and since you are admin of this vCluster, you can even run commands like these:
A virtual cluster behaves the same way as a regular Kubernetes cluster. That means you can run any `kubectl` command. Since you are admin of this vCluster, you can even run commands like these:

```bash
kubectl get namespace
kubectl get pods -n kube-system
```

Let's create a namespace and a demo nginx deployment to understand how vClusters work:
## What happens in the host cluster

To illustrate what happens in the host cluster, create a namespace and deploy NGINX:

```bash
kubectl create namespace demo-nginx
kubectl create deployment nginx-deployment -n demo-nginx --image=nginx
kubectl create deployment nginx-deployment -n demo-nginx --image=nginx -r 2
```

You can check that this demo deployment will create pods inside the vCluster:
Check that this deployment creates 2 pods inside the virtual cluster:

```bash
kubectl get pods -n demo-nginx
```

## What happens in the host cluster?
The first thing to understand is that **most** resources inside your vCluster will only exist in your vCluster and **not** make it to the underlying host cluster / host namespace.
Output is similar to:

```bash
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6d6565499c-2wfrd   1/1     Running   0          9s
nginx-deployment-6d6565499c-2blwr   1/1     Running   0          9s
```

### 1. Use Host Cluster Kube-Context
Let's verify this and switch our kube-context back to the host cluster:
```bash
# Switch back to the host cluster
vcluster disconnect
```

### 2. Check Namespaces
Now, let's check the namespaces in our host cluster:
```bash
kubectl get namespaces
```
```bash {3}
NAME                   STATUS   AGE
default                Active   11d
vcluster-my-vcluster   Active   9m17s
kube-node-lease        Active   11d
kube-public            Active   11d
kube-system            Active   11d
```
You will notice that there is **no namespace `demo-nginx`** because this namespace only exists inside the vCluster. Everything that belongs to the vCluster will always remain inside the vCluster's host namespace `vcluster-my-vcluster`.
**Most** resources inside your virtual cluster only exist in your virtual cluster and **not** in the underlying host cluster / host namespace.

### 3. Check Deployments
So, let's check to see if our deployment `nginx-deployment` has made it to the underlying host cluster:
```bash
kubectl get deployments -n vcluster-my-vcluster
```
```bash
No resources found in vcluster-my-vcluster namespace.
```
To verify this, perform these steps:

1. Switch back to the host context.

```bash
vcluster disconnect
```

1. Check namespaces in the host cluster.

```bash
kubectl get namespaces
```

Output is similar to:

```bash {3}
NAME                  STATUS   AGE
default               Active   11d
vcluster-my-cluster   Active   9m17s
kube-node-lease       Active   11d
kube-public           Active   11d
kube-system           Active   11d
```

Notice that there is **no namespace `demo-nginx`** because this namespace only exists inside the virtual cluster.

Everything that belongs to the virtual cluster always remains inside the vCluster's `vcluster-my-cluster` namespace.

1. Look for the NGINX deployment.

Check to see if your deployment `nginx-deployment` is in the underlying host cluster.

```bash
kubectl get deployments -n vcluster-my-cluster
```

Output is similar to:

```bash
No resources found in vcluster-my-cluster namespace.
```

You see that there is no deployment `nginx-deployment` because that deployment only lives inside the virtual cluster.

1. Look for the NGINX pods.

The last thing to check is pods running inside the virtual cluster namespace:

```bash
kubectl get pods -n vcluster-my-cluster
```

Output is similar to:

```bash {4-5}
NAME                                                          READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-9n8c4-x-kube-system-x-my-cluster           1/1     Running   0          129m
my-cluster-0                                                  1/1     Running   0          129m
nginx-deployment-6d6565499c-2blwr-x-demo-nginx-x-my-cluster   1/1     Running   0          7m25s
nginx-deployment-6d6565499c-2wfrd-x-demo-nginx-x-my-cluster   1/1     Running   0          7m25s
```

:::info Renaming
As you see in lines 4-5 of the output, the pod name is rewritten during the sync process since vCluster is mapping pods from namespaces inside the virtual cluster into one single host namespace in the underlying host cluster.
:::

The vCluster `my-cluster-0` pod contains the virtual cluster’s API server and some additional tools. There’s also a CoreDNS pod, which vCluster uses, and the two NGINX pods.

The host cluster has the `nginx-deployment` pods because the virtual cluster **does not** have separate nodes or a scheduler. Instead, the virtual cluster has a _syncer_ that synchronizes resources from the virtual cluster to the underlying host namespace.
The vCluster syncer process tells the underlying cluster to schedule workloads. This syncer process communicates with the API server of the host cluster to schedule the pods and keep track of their state.
To prevent collisions, vCluster appends to each synced pod's name the namespace it runs in inside the virtual cluster and the name of the virtual cluster itself.
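
The renaming scheme can be sketched as a small shell helper (hypothetical: `host_pod_name` is not part of the vCluster CLI; it only illustrates the `<pod>-x-<namespace>-x-<vcluster>` pattern visible in the output above):

```bash
# Illustration only: how a pod name from inside the virtual cluster maps to
# its name in the single host namespace. The "-x-" separator joins the pod
# name, its virtual-cluster namespace, and the vCluster name.
host_pod_name() {
  local pod="$1" namespace="$2" vcluster="$3"
  echo "${pod}-x-${namespace}-x-${vcluster}"
}

host_pod_name "nginx-deployment-6d6565499c-2wfrd" "demo-nginx" "my-cluster"
# → nginx-deployment-6d6565499c-2wfrd-x-demo-nginx-x-my-cluster
```

Because the namespace and vCluster name are both baked into the host-side name, pods from different namespaces (and different vClusters) can share one host namespace without clashing.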

Only very few resources and API server requests actually reach the underlying Kubernetes API server. Only workload-related resources (e.g. Pod) and networking-related resources (e.g. Service) need to be synchronized down to the host cluster since the vCluster does **not** have any nodes or network itself.

The state of most objects running in the virtual cluster is stored in a database inside it. By default, vCluster uses SQLite for that database, but it can also use etcd or other options such as PostgreSQL. Pods, however, are scheduled in the host cluster.

You will see that there is **no deployment `nginx-deployment`** because it also just lives inside the virtual cluster.

### 4. Check Pods
The last thing to check is pods:
```bash
kubectl get pods -n vcluster-my-vcluster
```

```bash {3}
NAME                                                           READY   STATUS    RESTARTS   AGE
coredns-66c464876b-p275l-x-kube-system-x-my-vcluster           1/1     Running   0          14m
nginx-deployment-84cd76b964-mnvzz-x-demo-nginx-x-my-vcluster   1/1     Running   0          10m
my-vcluster-0                                                  2/2     Running   0          14m
```

And there it is! The pod that has been scheduled for our `nginx-deployment` has actually made it to the underlying host cluster.

The reason for this is that vClusters do **not** have separate nodes\*. Instead, they have a **syncer** that synchronizes resources from the vCluster to the underlying host namespace, so the vCluster's pods actually run on the host cluster's nodes and their containers are started inside the underlying host namespace.

:::info Renaming
As you can see above in line 3, the names of pods get rewritten during the sync process since we are mapping pods from many namespaces inside the vCluster into one single host namespace in the underlying host cluster.
:::

## Benefits of Virtual Clusters
Virtual clusters provide immense benefits for large-scale Kubernetes deployments and multi-tenancy:
- **Full Admin Access**:
- Deploy operators with CRDs, and create namespaces and other cluster-scoped resources, which is usually not possible within a single namespace.
- Taint and label nodes without any influence on the host cluster.
- Reuse and share services across multiple virtual clusters with ease.
- **Cost Savings:**
- You can create lightweight vClusters that share the underlying host cluster instead of creating separate "real" clusters.
- vClusters are just deployments, so they can be easily auto-scaled, purged, snapshotted and moved.
- **Low Overhead:**
- vClusters are super lightweight and only reside in a single namespace.
- vClusters run with k3s, a super low-footprint k8s distribution, but they can also run with "real" k8s.
- The control plane of a vCluster runs inside a single pod (+1 CoreDNS pod for vCluster-internal DNS capabilities).
- **<u>No</u> Network Degradation:**
- Since the pods and services inside a vCluster are actually being synchronized down to the host cluster\*, they are effectively using the underlying cluster's pod and service networking and are therefore not a bit slower than any other pods in the underlying host cluster.
- **API Server Compatibility:**
- vClusters run with the k3s API server, which is a certified k8s distro, ensuring 100% Kubernetes API server compliance.
- vClusters have their own API server, controller manager, and a separate, isolated data store (SQLite is the easiest option, but this is configurable; you can also deploy a full-blown etcd if needed).
- **Security:**
- vCluster users need much fewer permissions in the underlying host cluster / host namespace.
- vCluster users can manage their own CRDs independently and can even mess with RBAC inside their own vClusters.
- vClusters provide an extra layer of isolation because each vCluster has its own API server and control plane (much fewer requests to the underlying cluster that need to be secured\*).
- **Scalability:**
- Less pressure / fewer requests on the k8s API server in large-scale clusters\*
- Higher scalability of clusters via cluster sharding / API server sharding into smaller vClusters
- No need for cluster admins to worry about conflicting CRDs or CRD versions with growing number of users and deployments

\* Only very few resources and API server requests actually reach the underlying Kubernetes API server. Only workload-related resources (e.g. Pod) and networking-related resources (e.g. Service) need to be synchronized down to the host cluster since the vCluster does **not** have any nodes or network itself.
