Major surgery on the how-to guides. Adds a lot more color as to what's actually happening, corrects some mistakes and fixes the controller setup docs.
1 parent 327b983 · commit 5aa334e
Showing 8 changed files with 393 additions and 305 deletions.
```yaml
---
title: How To
description: How To Guides for Getting the Most Out of Plural
---
```
---
title: Setting Up Ingress on a Cluster
description: Setting up your edge stack on a cluster
---
# Setting the stage

A very common problem a user will need to solve when setting up a new cluster is getting edge networking set up. This includes solving a few main concerns (a minimal example tying them together follows this list):
* Ingress - sets up an ingress controller for load balancing incoming HTTP requests into the microservices on your cluster
* DNS registration - you can use external-dns to automate listening for new hostname registrations in ingress resources and registering them with standard DNS services like Route53
* SSL cert management - the standard K8s approach to this is cert-manager
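
For orientation, here's a minimal, hypothetical `Ingress` showing where each concern surfaces once the stack below is installed. The name, hostname, and issuer are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app # hypothetical app
  annotations:
    cert-manager.io/cluster-issuer: dns01 # cert-manager issues the SSL cert (this issuer is created later in this guide)
spec:
  ingressClassName: nginx # routed by the ingress controller
  rules:
    - host: my-app.dev.example.com # external-dns registers this hostname in your DNS zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - my-app.dev.example.com
      secretName: my-app-tls # cert-manager writes the issued cert here
```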
`plural up` gets you 90% of the way there out of the box; you'll just need to configure a few basic things. We also provide a consolidated `runtime` chart that makes installing these in one swoop much easier, but you can also mix and match from the CNCF ecosystem based on your organization's requirements and preferences.

{% callout severity="info" %}
Throughout this guide, we'll recommend you write yaml files to the `bootstrap` folder or elsewhere. This is because `plural up` by default creates a service-of-services syncing resources under the `bootstrap` folder on commit to your main branch. If you set up without `plural up` or renamed that folder, you'll need to translate these paths to your own convention.

*The changes will also not apply until these files are `git push`'ed or merged to your main branch via a PR.*
{% /callout %}
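
To make that layout concrete, by the end of this guide the relevant parts of your repo will look roughly like this (using the file names from the examples below):

```
bootstrap/
  components/
    runtime.yaml
    aws-load-balancer.yaml
    cert-manager.yaml
    cluster-issuer.yaml
helm/
  runtime.yaml.liquid
  loadbalancer.yaml.liquid
  certmanager.yaml.liquid
services/
  cluster-issuer/
    clusterissuer.yaml
```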
## Setting Up The Runtime Chart

We're going to use our runtime chart for now, but the technique generalizes to any other helm chart as well. First, let's create a global service for the runtime chart. This will ensure it's installed on all clusters sharing a common tagset. Write this to `bootstrap/components/runtime.yaml`:

{% callout severity="info" %}
The global services will all be written to a subfolder of `bootstrap`. This is because `plural up` initializes a bootstrap service-of-services under that folder, so we can guarantee any file written there will be synced. Sets of configuration that should be deployed independently, and not to the mgmt cluster, ought to live in their own folder structure, which we typically put under `services/**`.
{% /callout %}
```yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: GlobalService
metadata:
  name: plrl-runtime
  namespace: infra
spec:
  tags:
    role: workload
  template:
    name: runtime
    namespace: plural-runtime # note this for later
    git:
      ref: main
      folder: helm
    repositoryRef:
      kind: GitRepository
      name: infra # this should point to your `plural up` repo
      namespace: infra
    helm:
      version: 4.4.x
      chart: runtime
      url: https://pluralsh.github.io/bootstrap
      valuesFiles:
        - runtime.yaml.liquid
```
Notice this is expecting a `helm/runtime.yaml.liquid` file. It would look something like:
```yaml
plural-certmanager-webhook:
  enabled: false
ownerEmail: <your-email>
external-dns:
  enabled: true
  logLevel: debug
  provider: aws
  txtOwnerId: plrl-{{ cluster.handle }} # templating in the cluster handle, which is unique, to be the externaldns owner id
  policy: sync
  domainFilters:
    - {{ cluster.metadata.dns_zone }} # check terraform/modules/clusters/aws/plural.tf for where this is set
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: {{ cluster.metadata.iam.external_dns }} # check terraform/modules/clusters/aws/plural.tf for where this is set
```
You'll also want to modify `terraform/modules/clusters/aws/plural.tf` to add the `dns_zone` to the cluster metadata; it will look something like this:
```tf
locals {
  dns_zone = "<my-dns-zone>" # you might also want to register this zone in the module, or can register it elsewhere
}

resource "plural_cluster" "this" {
  handle = var.cluster
  name   = var.cluster
  tags = merge({
    role   = "workload" # add this to allow for global services to target only workload clusters
    tier   = var.tier
    region = var.region
  })
  metadata = jsonencode({
    dns_zone = local.dns_zone
    iam = {
      load_balancer      = module.addons.gitops_metadata.aws_load_balancer_controller_iam_role_arn
      cluster_autoscaler = module.addons.gitops_metadata.cluster_autoscaler_iam_role_arn
      external_dns       = module.externaldns_irsa_role.iam_role_arn
      cert_manager       = module.externaldns_irsa_role.iam_role_arn
    }
  })
  kubeconfig = {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}
```
Notice also that we've already prebuilt the IAM policies for external-dns for you in `terraform/modules/clusters/aws/addons.tf`. If you want another add-on, you can easily imitate that pattern, or you're free to tune our defaults as well. The terraform that does it looks like this:
```tf
module "externaldns_irsa_role" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~> 5.33"

  role_name                  = "${module.eks.cluster_name}-externaldns"
  attach_external_dns_policy = true
  attach_cert_manager_policy = true

  oidc_providers = {
    main = {
      provider_arn = module.eks.oidc_provider_arn
      namespace_service_accounts = [
        "plural-runtime:external-dns",
        "external-dns:external-dns",
        "cert-manager:cert-manager"
      ]
    }
  }
}
```
This uses some off-the-shelf terraform modules maintained by the terraform-aws-modules project. Its output is then plumbed into the `plural_cluster.this` resource, which enables the dynamic templating in the `runtime.yaml.liquid` file. In general, any file can add a `.liquid` extension, and our agent will dynamically template it. You can read more about that [here](/deployments/templating).
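
As a quick illustration of what that templating does (the cluster handle and dns zone values here are hypothetical):

```yaml
# in helm/runtime.yaml.liquid (template source):
txtOwnerId: plrl-{{ cluster.handle }}
domainFilters:
  - {{ cluster.metadata.dns_zone }}

# what the agent renders for a cluster with handle "dev-1" and
# metadata dns_zone "dev.example.com":
txtOwnerId: plrl-dev-1
domainFilters:
  - dev.example.com
```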
## Setting Up AWS Load Balancer Controller (AWS only)

EKS is very bare-bones and doesn't ship with a fully featured load balancer controller by default. To get full support for provisioning NLBs and ALBs, you should deploy the load balancer controller to your cluster, which would be done with a global service at `bootstrap/components/aws-load-balancer.yaml`:
```yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: GlobalService
metadata:
  name: aws-load-balancer-controller
  namespace: infra
spec:
  cascade:
    delete: true
  tags:
    role: workload
  template:
    name: aws-load-balancer-controller
    namespace: kube-system
    helm:
      version: "x.x.x"
      chart: aws-load-balancer-controller
      url: https://aws.github.io/eks-charts
      valuesFiles:
        - loadbalancer.yaml.liquid
    git:
      folder: helm
      ref: main
    repositoryRef:
      kind: GitRepository
      name: infra
      namespace: infra
```
The `helm/loadbalancer.yaml.liquid` file would then be:
```yaml
clusterName: {{ cluster.handle }}
createIngressClassResource: false
deploymentAnnotations:
  client.lifecycle.config.k8s.io/deletion: detach
serviceAccount:
  create: true
  name: aws-load-balancer-controller-sa
  annotations:
    client.lifecycle.config.k8s.io/deletion: detach
    eks.amazonaws.com/role-arn: {{ cluster.metadata.iam.load_balancer }} # notice this is also from terraform/modules/clusters/aws/plural.tf
```
Adding the terraform in the `terraform/modules/clusters/aws` directory will ensure all AWS clusters, other than the MGMT cluster, contain those configurations. The Cluster Creator stack run is configured to watch that directory and deploy any committed changes; navigate to `https://console.[YOUR DOMAIN].onplural.sh/stacks` to see the status of the run.
## Setting Up Cert-Manager

Cert-manager is an almost ubiquitous component in Kubernetes and usually should be managed separately. We'll set it up, and also provision a Route53 dns01 issuer alongside it (write to `bootstrap/components/cert-manager.yaml`):
```yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: GlobalService
metadata:
  name: cert-manager
  namespace: infra
spec:
  cascade:
    delete: true
  tags:
    role: workload
  template:
    repositoryRef:
      kind: GitRepository
      name: infra
      namespace: infra
    git:
      ref: main
      folder: helm
    helm:
      version: "v1.x.x"
      chart: cert-manager
      url: https://charts.jetstack.io
      valuesFiles:
        - certmanager.yaml.liquid
```
The values file should then be placed at `helm/certmanager.yaml.liquid`:
```yaml
installCRDs: true
serviceAccount:
  name: cert-manager
  annotations:
    eks.amazonaws.com/role-arn: {{ cluster.metadata.iam.cert_manager }}
securityContext:
  fsGroup: 1000
  runAsNonRoot: true
```
{% callout severity="info" %}
The `runtime` chart does provision a `letsencrypt-prod` issuer using the http01 protocol, which usually requires no additional configuration. It might be suitable for your use case, in which case the following is unnecessary. That said, we've noticed http01 is more prone to flaking than dns01, and it can only work on publicly accessible endpoints since it requires an HTTP call to a service hosted by your ingress.
{% /callout %}
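
For contrast with the dns01 issuer below, the http01 flavor of a solver looks roughly like this (a sketch; the runtime chart's built-in issuer may differ in its details):

```yaml
solvers:
  - http01:
      ingress:
        ingressClassName: nginx # cert-manager answers the ACME challenge via a temporary route on your ingress
```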
This sets up IRSA auth for cert-manager so that dns01 ACME cert validations can succeed against Route53. You'll then want to create another service to spawn the cluster issuer resources cert-manager uses; you can do this by adding a file at `services/cluster-issuer/clusterissuer.yaml`:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your email> # replace here
    privateKeySecretRef:
      name: letsencrypt-prod
    # ACME DNS-01 provider configurations to verify domain
    solvers:
      - dns01:
          route53:
            region: us-east-1 # or whatever region you're configured for
```
And now you can add a final global service in `bootstrap/components/cluster-issuer.yaml`:
```yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: GlobalService
metadata:
  name: cluster-issuer
  namespace: infra
spec:
  cascade:
    delete: true
  tags:
    role: workload
  template:
    name: cluster-issuer
    namespace: cert-manager
    repositoryRef:
      kind: GitRepository
      name: infra
      namespace: infra
    git:
      ref: main
      folder: services/cluster-issuer
```