diff --git a/pages/how-to/index.md b/pages/how-to/index.md
index 2bb129e3..c2120333 100644
--- a/pages/how-to/index.md
+++ b/pages/how-to/index.md
@@ -1,4 +1,4 @@
---
title: How To
-description: How To Guides for Deploying and Managing Plural
+description: How To Guides for Getting the Most Out of Plural
---
\ No newline at end of file
diff --git a/pages/how-to/set-up/controllers.md b/pages/how-to/set-up/controllers.md
index 73dc5598..09d1ce3d 100644
--- a/pages/how-to/set-up/controllers.md
+++ b/pages/how-to/set-up/controllers.md
@@ -1,119 +1,291 @@
---
-title: Setting Up a Controller
-description: Adding Controllers to Clusters
+title: Setting Up Ingress on a Cluster
+description: Setting up your edge stack on a cluster
---
-# Prerequisites
-* **Plural Console `admin` permissions**
-* **`kubectl` cluster access**
+# Setting the stage
+A very common problem to solve when setting up a new cluster is getting edge networking in place. This includes a few main concerns:
-# Set Up
+* Ingress - sets up an ingress controller for load balancing incoming HTTP requests into the microservices on your cluster
+* DNS registration - you can use externaldns to listen for new hostname registrations in ingress resources and automatically register them with standard DNS services like route53
+* SSL Cert Management - the standard K8s approach to this is using Cert Manager.
-### Example: Ingress NGINX
-* **Add a new `HelmRepository` CRD to your _infra_ repo `./apps/repositories`**
-  * Example: `ingress-nginx.yaml`
-```yaml
-apiVersion: source.toolkit.fluxcd.io/v1beta2
-kind: HelmRepository
-metadata:
-  name: ingress-nginx
-spec:
-  interval: 5m0s
-  url: https://kubernetes.github.io/ingress-nginx
-```
-* **Commit and Push the changes**
-* **Navigate to `https://console.[YOUR DOMAIN].onplural.sh/cd/services`**
-  * Click on the `helm-repositories` Service
-  * You should see your newly added Helm Repository
+`plural up` gets you 90% of the way there out of the box; you'll just need to configure a few basic things. We also provide a consolidated `runtime` chart that makes installing these in one fell swoop much easier, but you can also mix and match from the CNCF ecosystem based on your organization's requirements and preferences.
+
+{% callout severity="info" %}
+Throughout this guide, we'll recommend you write yaml files to the bootstrap folder or elsewhere. This is because `plural up` by default creates a service-of-services syncing resources under the `bootstrap` folder on commit to your main branch. If you set up without `plural up` or renamed this folder, you'll need to translate these paths to your own convention.
+
+*The changes will also not apply until these files are `git push`'ed or merged to your main branch via a PR.*
+{% /callout %}
+
+## Setting Up The Runtime Chart
+
+We're going to use our runtime chart for now, but the technique generalizes to any other helm chart as well. First, let's create a global service for the runtime chart. This will ensure it's installed on all clusters with a common tagset. Write this to `bootstrap/components/runtime.yaml`:
+
+{% callout severity="info" %}
+The global services will all be written to a subfolder of `bootstrap`. This is because `plural up` initializes a bootstrap service-of-services under that folder, so we can guarantee any file written there will be synced. Sets of configuration that should be deployed independently and not to the mgmt cluster ought to live in their own folder structure, which we typically put under `services/**`.
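+
+For orientation, here's a rough sketch of the repo layout this guide assumes (created by `plural up`; your repo may differ slightly):
+
+```sh
+bootstrap/            # synced by the root service-of-services on merge to main
+  components/         # global services for cluster add-ons (written throughout this guide)
+helm/                 # shared helm values files, with *.liquid templating supported
+services/             # config deployed independently of bootstrap, e.g. services/cluster-issuer
+terraform/
+  modules/clusters/   # per-cloud cluster provisioning modules, e.g. aws/
+```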
+{% /callout %}
-* **Add a new [`ServiceDeployment`](https://docs.plural.sh/deployments/operator/api#servicedeployment) CRD YAML to `./apps/services`**
-  * Example: `ingress-nginx.yaml`
```yaml
apiVersion: deployments.plural.sh/v1alpha1
-kind: ServiceDeployment
+kind: GlobalService
metadata:
-  name: ingress-nginx
+  name: plrl-runtime
  namespace: infra
spec:
-  name: ingress-nginx
-  namespace: ingress-nginx
-  helm:
-    version: 4.4.x
-    chart: ingress-nginx
-    values:
-      # in-line helm values, will be stored encrypted at rest
-      controller:
-        image:
-          digest: null
-          digestChroot: null
-        admissionWebhooks:
-          enabled: false
-  repository:
+  tags:
+    role: workload
+  template:
+    name: runtime
+    namespace: plural-runtime # note this for later
+    git:
+      ref: main
+      folder: helm
+    repositoryRef:
+      name: infra # this should point to your `plural up` repo
      namespace: infra
-    name: ingress-nginx # referenced helm repository above
-  clusterRef:
-    kind: Cluster
-    name: plrl-how-to-workload-00-dev
-    namespace: infra
+    helm:
+      version: 4.4.x
+      chart: runtime
+      url: https://pluralsh.github.io/bootstrap
+      valuesFiles:
+      - runtime.yaml.liquid
```
-* **Apply the CRD to the Cluster**
-  * `k apply -f ./apps/services/ingress-nginx.yaml`
-  * Notice how we apply the service CRD to the _MGMT_ cluster, but the application deploys on the workload cluster specified in the `clusterRef`
+Notice this is expecting a `helm/runtime.yaml.liquid` file. This would look something like:
-### Example: External DNS, on AWS
-* **Add a new `HelmRepository` CRD to your _infra_ repo `./apps/repositories`**
-  * Example: `externaldns.yaml`
```yaml
-apiVersion: source.toolkit.fluxcd.io/v1beta2
-kind: HelmRepository
-metadata:
-  name: external-dns
-spec:
-  interval: 5m0s
-  url: https://kubernetes-sigs.github.io/external-dns
+plural-certmanager-webhook:
+  enabled: false
+
+ownerEmail: # replace with your email
+
+external-dns:
+  enabled: true
+
+  logLevel: debug
+
+  provider: aws
+
+  txtOwnerId: plrl-{{ cluster.handle }} # templating in the cluster handle, which is unique, to be the externaldns owner id
+
+  policy: sync
+
+  domainFilters:
+  - {{ cluster.metadata.dns_zone }} # check terraform/modules/clusters/aws/plural.tf for where this is set
+
+  serviceAccount:
+    annotations:
+      eks.amazonaws.com/role-arn: {{ cluster.metadata.iam.external_dns }} # check terraform/modules/clusters/aws/plural.tf for where this is set
```
-* **Add `./terraform/modules/clusters/aws/external-dns.tf`**
-```sh
-module "external-dns_irsa_role" {
-  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
+
+You'll also want to modify `terraform/modules/clusters/aws/plural.tf` to add the `dns_zone` to the metadata; it will look something like this:
+
+```tf
+locals {
+  dns_zone = # you might also want to register this zone in the module, or you can register it elsewhere
+}
+
+resource "plural_cluster" "this" {
+  handle = var.cluster
+  name   = var.cluster
+
+  tags = merge({
+    role   = "workload" # add this to allow for global services to target only workload clusters
+    tier   = var.tier
+    region = var.region
+  })
+
+  metadata = jsonencode({
+    dns_zone = local.dns_zone
+
+    iam = {
+      load_balancer      = module.addons.gitops_metadata.aws_load_balancer_controller_iam_role_arn
+      cluster_autoscaler = module.addons.gitops_metadata.cluster_autoscaler_iam_role_arn
+      external_dns       = module.externaldns_irsa_role.iam_role_arn
+      cert_manager       = module.externaldns_irsa_role.iam_role_arn
+    }
+  })
+
+  kubeconfig = {
+    host                   = module.eks.cluster_endpoint
+    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
+    token                  = data.aws_eks_cluster_auth.cluster.token
+  }
+}
+```
+
+Notice also, we've already prebuilt the IAM policies for external-dns for you in `terraform/modules/clusters/aws/addons.tf`. If you want another add-on, you can easily imitate that pattern, or you're free to tune our defaults as well. The terraform that does it looks like this:
+
+```tf
+module "externaldns_irsa_role" {
+  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~> 5.33"
-  role_name = "${module.eks.cluster_name}-external-dns"
+  role_name = "${module.eks.cluster_name}-externaldns"
  attach_external_dns_policy = true
+  attach_cert_manager_policy = true
  oidc_providers = {
    main = {
      provider_arn = module.eks.oidc_provider_arn
-      namespace_service_accounts = ["${var.external-dns-namespace}:external-dns"]
+      namespace_service_accounts = [
+        "plural-runtime:external-dns",
+        "external-dns:external-dns",
+        "cert-manager:cert-manager"
+      ]
    }
  }
}
```
-  * And any additional variables to `./terraform/modules/clusters/aws/variables.tf`
-```sh
-variable "external-dns-namespace" {
-  type = string
-  default = "external-dns"
-}
+
+This uses some off-the-shelf IRSA modules from the `terraform-aws-modules` project. Its output is then plumbed into the `plural_cluster.this` resource, which is what enables the dynamic templating in the `runtime.yaml.liquid` file. In general, any file can add a `.liquid` extension, and our agent will dynamically template it. You can read more about that [here](/deployments/templating).
+
+## Setting Up AWS Load Balancer Controller (AWS only)
+
+EKS is very bare bones and doesn't ship with a fully featured load balancer controller by default. To get full support for provisioning NLBs and ALBs, you should deploy the load balancer controller to your cluster, which would be done with a global service at `bootstrap/components/aws-load-balancer.yaml`:
+
+```yaml
+apiVersion: deployments.plural.sh/v1alpha1
+kind: GlobalService
+metadata:
+  name: aws-load-balancer-controller
+  namespace: infra
+spec:
+  cascade:
+    delete: true
+  tags:
+    role: workload
+  template:
+    name: aws-load-balancer-controller
+    namespace: kube-system
+    helm:
+      version: "x.x.x"
+      chart: aws-load-balancer-controller
+      url: https://aws.github.io/eks-charts
+      valuesFiles:
+      - loadbalancer.yaml.liquid
+    git:
+      folder: helm
+      ref: main
+    repositoryRef:
+      kind: GitRepository
+      name: infra
+      namespace: infra
+```
+
+The `helm/loadbalancer.yaml.liquid` file would then be:
+
+```yaml
+clusterName: {{ cluster.handle }}
+createIngressClassResource: false
+
+deploymentAnnotations:
+  client.lifecycle.config.k8s.io/deletion: detach
+
+serviceAccount:
+  create: true
+  name: aws-load-balancer-controller-sa
+  annotations:
+    client.lifecycle.config.k8s.io/deletion: detach
+    eks.amazonaws.com/role-arn: {{ cluster.metadata.iam.load_balancer }} # notice this is also from terraform/modules/clusters/aws/plural.tf
+```
-* **Commit and Push the Changes**
-Adding the terraform in the `~/terraform/modules/cluster/aws` directory
-will ensure all AWS cluster, other than MGMT, will contain those configurations.
-The Cluster Creator Stack Run is configured to watch that directory and deploy any committed changes.
-* **Navigate to `https://console.[YOUR DOMAIN].onplural.sh/stacks`** to see the status of the run
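+
+Once the controller is running, workloads can request NLBs with the standard aws-load-balancer-controller annotations. Here's a minimal sketch — the service name and ports are hypothetical, and the annotations are the upstream controller's conventions rather than anything Plural-specific:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-app            # hypothetical workload service
+  annotations:
+    service.beta.kubernetes.io/aws-load-balancer-type: external
+    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
+    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
+spec:
+  type: LoadBalancer
+  selector:
+    app: my-app
+  ports:
+    - port: 80
+      targetPort: 8080
+```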
+## Setting Up Cert-Manager
+Cert-Manager is an almost ubiquitous component in kubernetes and usually should be managed separately. We'll set it up, and also provision a Route53 dns01 issuer alongside it (write to `bootstrap/components/cert-manager.yaml`):
-# Troubleshooting
-#### Get Kubeconfig for the MGMT Cluster
-```sh
-plural wkspace kube-init
+```yaml
+apiVersion: deployments.plural.sh/v1alpha1
+kind: GlobalService
+metadata:
+  name: cert-manager
+  namespace: infra
+spec:
+  cascade:
+    delete: true
+  tags:
+    role: workload
+  template:
+    repositoryRef:
+      kind: GitRepository
+      name: infra
+      namespace: infra
+    git:
+      ref: main
+      folder: helm
+    helm:
+      version: "v1.x.x"
+      chart: cert-manager
+      url: https://charts.jetstack.io
+      valuesFiles:
+      - certmanager.yaml.liquid
```
-Use `kubectl` with the newly added kube context
-The key namespaces to check are:
-* plrl-console
-* plrl-deploy-operator
-* plrl-runtime
+The values file needed here should be placed in `helm/certmanager.yaml.liquid`:
+
+```yaml
+installCRDs: true
+serviceAccount:
+  name: cert-manager
+  annotations:
+    eks.amazonaws.com/role-arn: {{ cluster.metadata.iam.cert_manager }}
+
+securityContext:
+  fsGroup: 1000
+  runAsNonRoot: true
+```
+
+{% callout severity="info" %}
+The `runtime` chart does provision a `letsencrypt-prod` issuer using the http01 protocol, which usually requires no additional configuration. It might be suitable for your use case, in which case the following is unnecessary, but we have noticed it's more prone to flaking than dns01. Also, it can only work on publicly accessible endpoints, since it requires an HTTP call to a service hosted by your ingress.
+{% /callout %}
+
+This sets up IRSA auth for cert-manager so dns01 ACME cert validations can succeed using Route53. You'll then want to create another service to spawn the cluster issuer resources cert-manager uses; you can do this by adding a file at `services/cluster-issuer/clusterissuer.yaml`:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+  name: dns01
+spec:
+  acme:
+    server: https://acme-v02.api.letsencrypt.org/directory
+    email: # replace here
+
+    privateKeySecretRef:
+      name: letsencrypt-prod
+
+    # ACME DNS-01 provider configurations to verify domain
+    solvers:
+    - dns01:
+        route53:
+          region: us-east-1 # or whatever region you're configured for
+```
+
+And now you can add a final global service in `bootstrap/components/cluster-issuer.yaml`:
+
+```yaml
+apiVersion: deployments.plural.sh/v1alpha1
+kind: GlobalService
+metadata:
+  name: cluster-issuer
+  namespace: infra
+spec:
+  cascade:
+    delete: true
+  tags:
+    role: workload
+  template:
+    name: cluster-issuer
+    namespace: cert-manager
+    repositoryRef:
+      kind: GitRepository
+      name: infra
+      namespace: infra
+    git:
+      ref: main
+      folder: services/cluster-issuer
+```
diff --git a/pages/how-to/set-up/mgmt-cluster.md b/pages/how-to/set-up/mgmt-cluster.md
index d9762fe2..b9bcebeb 100644
--- a/pages/how-to/set-up/mgmt-cluster.md
+++ b/pages/how-to/set-up/mgmt-cluster.md
@@ -3,121 +3,47 @@
title: Setting Up a New Management (MGMT) Cluster
description: Using Plural CLI to Deploy a Management (MGMT) Kubernetes Cluster
---
-### Prerequisites
-[Plural CLI](/getting-started/quickstart)
+# Overview
-### Setup Repo and Deploy Resources
-Ensure your _[app.plural.sh](https://app.plural.sh/profile/me)_ User has `admin` permissions
-Follow the onscreen prompts to setup the repo and deploy resources
+Plural's architecture, described [here](/deployments/architecture), has two tiers:
-* The **Plural** CLI will create a new repository in the current directory
-  * If there are permission related repository creation constraints - the repo
-can be cloned before running `plural` commands
+* Management Cluster - a single management plane that will oversee the core responsibilities of fleet management: CD, terraform management, dashboarding, etc.
+* Workload Cluster - a zoo of clusters you provision to run actual development and production workloads for your enterprise.
-* Use the provided **Plural** DNS Services for the Management (MGMT) Cluster
-  * When providing a domain name provide the _canonical_ name, e.g. how-to-plrl.onplural.sh
+To get started with Plural, you need to provision your management cluster. There are two paths for this:
-```sh
-plural login
-plural up
-```
+* Plural Cloud - a fully managed instance of the Plural Console. We offer both shared infrastructure hosting, with usage limits and lower cost, and dedicated hosting, which is more secure and enterprise-ready.
+* Self-Hosted - deploy and manage it yourself in your own cloud environment; we've provided a seamless getting-started experience with `plural up` to do this.
-# Troubleshooting
-### "Console failed to become ready"
-Sometimes the DNS Resolution can take longer than the expected five minutes
-It's also possible the console services take a bit longer to become ready
-```sh
-2024/07/29 12:31:03 Console failed to become ready after 5 minutes, you might want to inspect the resources in the plrl-console namespace
-```
-In this instance the images in the _`plrl-console`_ namespace
-were taking a bit longer to download and initialize.
-Once the services were _up_ in the cli, I was able to access the console url
-
-### "Cannot list resources in the Kubernetes Dashboard"
-![alt text](/images/how-to/k8s-dash-403.png)
-This is expected and due to missing [RBAC Bindings](/deployments/dashboard#rbac) for the console users
-
-##### Creating an RBAC Service
-* **Create an `rbac` dir in your MGMT repo
-and add the desired [k8s yaml](https://github.com/pluralsh/documentation/blob/main/pages/deployments/dashboard.md)**
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: someones-binding
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: cluster-admin
-subjects:
-# This will create a single binding for the someone@your.company.com user to the cluster-admin k8s role
-  - apiGroup: rbac.authorization.k8s.io
-    kind: User
-    name: someone@your.company.com
-# The following will bind the role to everyone in the `sre` group
-  - apiGroup
-    kind: Group
-    name: sre
-```
-* **In the `./apps/services` dir in your MGMT repo**
-  * Add a Service Deployment CRD - This will create a service to sync the rbac bindings
-```yaml
-apiVersion: deployments.plural.sh/v1alpha1
-kind: ServiceDeployment
-metadata:
-  name: rbac
-spec:
-  clusterRef:
-    kind: Cluster
-    name: mgmt
-    namespace: infra
-  namespace: plrl-rbac
-  git:
-    folder: rbac
-    ref: main
-  repositoryRef:
-    kind: GitRepository
-    name: infra # can point to any git repository CRD
-    namespace: infra
-```
-* **Commit and push your changes**
-* **Apply the Service CRD to the MGMT Cluster**
-`kubectl apply -f ./services/rbac.yaml`
-
-#### (Optionally) Make the RBAC Service Global
-**ℹ️ Services created with the Console UI need to have the CRDs applied manually**
-* **Navigate to `https://{your-console-domain}/cd/globalservices`**
-
-* **Click the `New Global Service` button**
-  * Service Name: Name of the Existing Service
-  * (Optionally) Add Cluster Tags
-  * Select the Cloud Provider Distributions to Propagate the changes
-* **Click `Continue`**
-* **Copy and Modify the Generated YAML**
-```yaml
-apiVersion: deployments.plural.sh/v1alpha1
-kind: GlobalService
-metadata:
-  name: global-rbac
-  namespace: infra
-spec:
-  serviceRef:
-    name: rbac # ⬅️ We need to update this with the service we created for rbac
-    namespace: infra
-```
-* **(Optionally) Save the Global Service YAML**
-  * Saving the global service yaml is not required once it is applied to the cluster
-  * I keep the applied yaml in `services/global-rbac.yaml` for reference
+Plural Cloud is a fully managed solution for provisioning Plural's core management software. It will host the Plural Console, alongside its git cache, underlying postgres database, and the kubernetes-agent-server api. To get started, create an account on https://app.plural.sh and go through the process for setting up your Plural Cloud instance.
+
+There are two options, `shared` and `dedicated`:
+* A `shared` instance can be created on a free trial but has a hard cap of 10 clusters to avoid overloading other tenants.
+* `dedicated` cloud instances get a dedicated k8s cluster and database, and are built to scale effectively infinitely. To use a `dedicated` instance, an enterprise plan is required, so please contact sales and we can get you set up as quickly as possible if that fits your use case.
+
+The UI should guide you through the entire process. Once your console is up, you'll be greeted with a modal explaining how to finalize the onboarding. You'll still need to create a small management cluster in your cloud to host the Plural operator and any cloud-specific secrets. This ensures your cloud is fully secured and allows you to use Plural Cloud without exchanging root-level cloud permissions. You'll do that by simply running:
-### As a Last Resort, Use `kubectl`
```sh
-plural wkspace kube-init
+plural up --cloud
```
-Use `kubectl` with the newly added kube context
-The key namespaces to check are:
-* plrl-console
-* plrl-deploy-operator
-* plrl-runtime
\ No newline at end of file
+Since it doesn't require setting up ingress controllers, SSL certs, etc., it's usually a very repeatable process.
+
+{% callout severity="info" %}
+Another benefit of the `plural up` command is that it bootstraps an entire GitOps repo for you, making it much easier to get started with production-ready infrastructure than having to hand-code it all yourself.
+{% /callout %}
+
+## `plural up` (still pretty easy)
+
+`plural up` is a single command to spawn your management cluster from zero in any of the big three clouds (AWS, Azure, GCP). We have docs thoroughly going over the process to use it [here](/deployments/cli-quickstart).
+
+There are a few reasons you'd consider using this over Plural Cloud:
+
+* Security - you want to ensure Plural hosts absolutely no cloud-related permissions. You can even follow our [sandboxing guide](/deployments/sandboxing) to remove all egress to Plural (this requires an enterprise license key).
+* Networking - you want to host the Plural Console entirely on a private network. Plural Cloud is currently always publicly hosted.
+* Integration - oftentimes resources needed by Plural are themselves hosted on private networks, for instance Git Repositories. In that case, it's logistically easier to self-host and place the Console in an integrated network.
+* Scaling - you want complete control over how Plural scales for your enterprise. `dedicated` cloud hosting does this perfectly well too, but some orgs want their own hands on the wheel.
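+
+If you do go the self-hosted route, the core flow is just a couple of CLI commands (see the quickstart linked above for the full walkthrough and cloud prerequisites):
+
+```sh
+plural login   # authenticate the CLI against your plural account
+plural up      # provision the management cluster and bootstrap the GitOps repo
+```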
+Plural is meant to be architecturally simple and efficient. Most organizations that do choose to self-host are shocked at how streamlined managing it is, especially compared to some more bloated CNCF projects, so it is a surprisingly viable way to manage the software if that is what your organization desires.
\ No newline at end of file
diff --git a/pages/how-to/set-up/plural-cli.md b/pages/how-to/set-up/plural-cli.md
deleted file mode 100644
index a37e8329..00000000
--- a/pages/how-to/set-up/plural-cli.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Installing the Plural CLI
-description: Guides for installing the Plural CLI
----
-
-### Install Prerequisites
-
-[Mac Homebrew](https://brew.sh/)
-```sh
-/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
-brew update
-```
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html), [Helm CLI](https://helm.sh/), [Terraform](https://developer.hashicorp.com/terraform/intro), [kubectl](https://kubernetes.io/docs/reference/kubectl/)
-```sh
-brew install awscli helm terraform kubectl
-```
-
-### Install [Plural CLI](https://github.com/pluralsh/plural-cli/?tab=readme-ov-file#installation)
-```sh
-brew install pluralsh/plural/plural
-```
-
-### Validate Install
-```sh
-plural login
-```
-
-
diff --git a/pages/how-to/set-up/plural-console.md b/pages/how-to/set-up/plural-console.md
deleted file mode 100644
index 1a45b1c1..00000000
--- a/pages/how-to/set-up/plural-console.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Setting Up Plural Console
-description: How to Deploy the Plural Console to a MGMT Cluster
----
-
-### Prerequisites
-[Plural CLI](/how-to/set-up/plural-cli)
-
-### Deploy Plural Console
-The `plural cd control-plane` command creates the _`values.secret.yaml`_
-and we use `helm` to apply them to the cluster
-```sh
-plural login
-# Note: If you deployed using bootstrap terraform you can get the PSQL connection string from running: terraform output --json
-plural cd control-plane
-helm repo add plrl-console https://pluralsh.github.io/console
-helm upgrade --install --create-namespace -f values.secret.yaml console plrl-console/console -n plrl-console
-```
-
diff --git a/pages/how-to/set-up/scm-connection.md b/pages/how-to/set-up/scm-connection.md
index b5b6ac1a..c3b5fb40 100644
--- a/pages/how-to/set-up/scm-connection.md
+++ b/pages/how-to/set-up/scm-connection.md
@@ -1,9 +1,21 @@
---
-title: Setting Up an SCM Connection
-description: Connecting Plural to a Source Control Management Provider
+title: Integrate with your Source Control Provider
+description: Set Up an SCM Connection in Plural to integrate with Github, GitLab, or BitBucket
---
+# Overview
+
+Plural has the ability to spawn pull requests, post review comments, and other functionality to integrate with your standard SCM (Source Control Management) workflows. The most impactful of these is the PR Automation API, which allows you to spawn templated pull requests to do common tasks like:
+
+* provision new clusters
+* drive deployment pipelines
+* in conjunction with Stacks, provision associated kubernetes or cloud infrastructure (databases, IAM, networks)
+
+To get this working, you'll first need to give your Plural Console scoped access tokens to your SCM.
+
# Prerequisites
+Some things you'll need to run this tutorial:
+
* **Plural Console `admin` Permissions**
* **SCM Provider Personal Access Token**
  * [Github](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic)
  * [GitLab](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html)
  * [Bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/app-passwords/)
* **Console Domain Admin/Owner Access**
@@ -13,25 +25,31 @@
  * This is only required when creating the webhook
  * The workload cluster can still be created without the SCM webhook
-# Set Up
-### Create a New SCM Connection
+## Create a New SCM Connection
+
+Creating an SCM Connection is easiest through our UI. You can then reference that created resource via k8s CRD to drive other workflows. Step by step, you'll want to:
+
* **Navigate to `https://{your-console-domain}/pr/scm`**
* **Click the _Create Connection_ Button at the Top Right**
-![Create SCM Connection Button](/images/how-to/console_create-scm-btn.png)
-  * **Fil in the Required Fields**
+  * **Fill in the Required Fields**
  * **Provider Type**: The SCM Provider Hosting Git Repositories
  * **Name**: Reference Name for the Provider
    * ℹ️ **NOTE**: The _cluster-creator_ PR Automation looks for `github` by default, but is arbitrary and can be changed
-  * **Token**: The Deploy Token to use
+  * **Token**: The Deploy Token to use.
+* **Click `Create`**
![Create SCM Connection Modal](/images/how-to/console_create-scm-modal.png)
-### **Create an [`ScmConnection`](/deployments/operator/api#scmconnection) CRD Instance**
+{% callout severity="info" %}
+For `github` tokens, the minimal permissions needed will be `repo` permissions. You can also use a Github App if you'd like the token to not be scoped to a specific user. For Github Apps, recommended permissions are read/write for Code, Pull Requests, and Workflows (Workflows access lets you write PR Automations that touch Github Actions, but is technically optional).
+{% /callout %}
+
+## **Create an [`ScmConnection`](/deployments/operator/api#scmconnection) CRD Instance**
+
Once the connection is created in the UI we can reference it with a CRD instance
* ❕ Ensure the Name Provided in the UI matches the `spec.name` in the CRD Exactly
-  * An [`ScmConnection`](/deployments/operator/api#scmconnection) yaml template for GitHub exists in `./app/services/pr-automation/scm.yaml`
-  * Use `kubectl` to apply it to the MGMT cluster
+  * An [`ScmConnection`](/deployments/operator/api#scmconnection) yaml template for GitHub exists in `bootstrap/pr-automation/scm.yaml`; you'll simply need to uncomment it. It should look like this:
+
```yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: ScmConnection
metadata:
  name: github
spec:
  name: github
  type: GITHUB
```
-### **Add an SCM Provider Webhook**
-If you navigate to `https://{your-console-domain}/pr/queue`
-You'll see even though the SCM connection is complete
-and the PR is merged the status of the cluster creator PR is still _open_
-We need to add an SCM Webhook to fix this.
+You should now be able to commit and push, and Plural will sync it in (or create a PR to merge it into your main branch):
+
+```sh
+git commit -m "setup a Plural scm connection"
+git push
+```
+
+{% callout severity="info" %}
+`plural up` by default creates an `apps` service syncing all subfolders of the `bootstrap` folder in the repo it creates. Since we created this file in `bootstrap/pr-automation`, that root service-of-services will sync it appropriately, and we just need to `git push` to apply the change.
+{% /callout %}
+
+## **Add an SCM Provider Webhook**
+
+Plural listens to incoming PRs to drive a couple of workflows:
+
+* labeling the status of spawned PRs generated by PR Automations. Without a webhook, they'll just remain OPEN perpetually.
+* listening for PRs linked to stacks so we can generate `plan` runs for terraform/pulumi/etc, and post back the results as a PR comment.
+
+If you want that functionality, simply add a webhook in your SCM by doing the following:
+
* **Navigate to `https://{your-console-domain}/pr/scm-webhooks`**
* **Click the `Create Webhook` Button**
-![](/images/how-to/create-scm-webhook-btn.png)
* **Fill the Required Fields**
  * **Provider Type**: The SCM Provider Hosting Git Repositories
    * This may be obvious, but you need to select the same provider as the console webhook
  * **Owner**: The Organization or Group Within the SCM Provider
-  * **Secret**: The Webhook Secret to Share
+  * **Secret**: The Webhook Secret to Share; you can generate a cryptographically secure one with `plural crypto random`
![](/images/how-to/create-scm-webhook-modal-0.png)
* **Click `Create`**
  * Copy the Webhook URL and note the secret to use within the SCM Provider Webhook
![](/images/how-to/create-scm-webhook-modal-1.png)
-  * **Create the Webhook with the SCM Provider**
-❕ You Must be an Owner or Have Admin Access to Create Webhooks
+
+You can create this webhook at whatever scope you'd prefer. Depending on the scope, the permissions needed will likely vary. A simple place to start is just creating a webhook scoped to your `plural up` repository.
+
+Here are some docs to help you work through the webhook creation process for a lot of SCM providers:
+
+* [Github Repository Webhooks](https://docs.github.com/en/webhooks/using-webhooks/creating-webhooks#creating-a-repository-webhook)
* [GitHub Organization Webhooks](https://docs.github.com/en/webhooks/using-webhooks/creating-webhooks#creating-an-organization-webhook)
* [GitLab Group Webhooks](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#group-webhooks)
* [Bitbucket Webhooks](https://confluence.atlassian.com/bitbucketserver/manage-webhooks-938025878.html)
diff --git a/pages/how-to/set-up/workload-cluster.md b/pages/how-to/set-up/workload-cluster.md
index 87491687..4f08d6dc 100644
--- a/pages/how-to/set-up/workload-cluster.md
+++ b/pages/how-to/set-up/workload-cluster.md
@@ -1,27 +1,37 @@
---
-title: Setting Up a Workload Cluster
-description: Using Plural CLI to Deploy a Workload Kubernetes Cluster
+title: Setting Up Your First Workload Cluster
+description: Use a Self-Service PR Automation to Provision Your First Workload Cluster
---
-# Prerequisites
-* **[Plural SCM Connection](/how-to/set-up/scm-connection)**
-* **Plural Console `admin` permissions**
+# Overview
-# Set Up
-### Enable the `Cluster creator` PR Automation
-The Cluster Creator PR Automation CRD is created by default from `plural up`
-But the [Plural SCM Connection](/how-to/set-up/scm-connection) needs to be instantiated
+Now that you have a management cluster and your SCM connected, you can test out our self-service provisioning using PR Automations and our terraform management system called [Stacks](/stacks/overview).
+At a high level, this is going to:
+
+* utilize a PR Automation (PRA) to instantiate a few CRDs into folders syncable by the root service-of-services in the `bootstrap` folder
+* create a terraform stack, via the `InfrastructureStack` CRD the PRA writes, referencing code in the `terraform/modules/clusters/aws` folder to provision a new cluster. Your management cluster should already have been configured with sufficient IAM perms to create this cluster. Any future commits to that folder will also be tracked and generate new terraform runs to sync in changes to the desired infrastructure.
+
+{% callout severity="warning" %}
+This guide will not work properly unless you've finished the tutorial [Integrate with your Source Control Provider](/how-to/set-up/scm-connection).
+{% /callout %}
+
+## Enable the `cluster-creator` PR Automation
+
+There should be a CRD at `bootstrap/pr-automation/cluster-creator.yaml` which will create the PRA that drives this tutorial. By default it references a `github` ScmConnection CRD, so you'll need to have created that first; once it's in place, the operator will create the PR Automation in our API, and it will be visible in the UI as well.
+
+If it's not showing, navigate to the `apps` service and filter on `PrAutomation` resources. You should be able to see error messages in the YAML explaining what the operator is expecting to have present.
+
+## **Create a Workload Cluster**
+
+Now that the PR Automation is configured, we should be able to spawn our cluster seamlessly. The steps are:
-### **Create a Workload Cluster**
-To create a new workload cluster we can use the builtin Plural _cluster-creator_ PR Automation
* **Navigate to `https://{your-console-domain}/pr/automations`**
-* **Click `Create a PR` in the `cluster-creator` Automation Object**
+* **Click `Create a PR` on the `cluster-creator` Automation Row**
![cluster-creator pr button](/images/how-to/cluster-creator-obj.png)
* **Fill in the Required Fields**
  * **Name**: The Name of the Cluster
  * **Cloud**: Cloud Provider to Deploy the Cluster (Dropdown Menu)
-  * **Fleet**: The Fleet to Associate the cluster
-  * **Tier**: The Tier to Place the Cluster
+  * **Fleet**: The Fleet to Associate the Cluster (this is arbitrary but can help you group like clusters)
+  * **Tier**: The Tier to Place the Cluster (dev/staging/prod)
![cluster-creator modal 0](/images/how-to/cluster-creator-modal-0.png)
* **Click `Next`**
* **Enter the Name of the Branch to Create the PR**
@@ -30,14 +40,16 @@ To create a new workload cluster we can use the builtin Plural _cluster-creator_
* Optionally [View The PR](https://github.com/pluralsh/plrl-how-to/pull/1) that was created
![cluster-creator modal 2](/images/how-to/cluster-creator-modal-2.png)
* **Merge the PR**
-* **Approve the Stack Run changes**
-  * Navigate to `https://{your-console-domain}/staacks`
-* **Click `Pending Approval` Button on the Newly Created Stack**
-![](/images/how-to/pending-approval-btn.png)
-* **Once Approved the Stack Run will Execute**
+Once this PR is merged and our CD system syncs all the manifests, you should see a `clusters` service. This will have synced in the `InfrastructureStack` CRD and caused a Stack to be created at `https://{your-console-domain}/stacks`.
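+
+For orientation, the stack manifest the PRA generates will look roughly like the following sketch — the exact fields come from the PR Automation template in your repo, so treat this as illustrative rather than something to copy:
+
+```yaml
+apiVersion: deployments.plural.sh/v1alpha1
+kind: InfrastructureStack
+metadata:
+  name: my-cluster               # hypothetical cluster name from the PRA inputs
+  namespace: infra
+spec:
+  type: TERRAFORM
+  approval: true                 # runs pause at Pending Approval until a human approves
+  repositoryRef:
+    name: infra
+    namespace: infra
+  git:
+    ref: main
+    folder: terraform/modules/clusters/aws
+  clusterRef:                    # the cluster the runs execute on, typically mgmt
+    name: mgmt
+    namespace: infra
+```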
+
+By default these stacks require approval for safety (terraform can do the strangest things sometimes; you should always validate a terraform plan before applying). To do that:
+* **Navigate to `https://{your-console-domain}/stacks` and click on the run, which should be in the `Pending Approval` state**
+* **Click the `Approve` Button in the top-right**
+  * You can also see the plan in the run logs and the `Plan` subtab if you want to ensure the plan looks sane.
-# Troubleshooting
-[Adding A GH PR Webhook](/how-to/set-up/scm-connection#add-an-scm-provider-webhook)
+{% callout severity="info" %}
+Cluster provisioning usually takes quite a while. On AWS, expect the process to take upwards of 20m; it can be more like 10m on GCP.
+{% /callout %}
\ No newline at end of file
diff --git a/src/NavData.tsx b/src/NavData.tsx
index 7a4e777c..21f52c10 100644
--- a/src/NavData.tsx
+++ b/src/NavData.tsx
@@ -102,34 +102,23 @@ const rootNavData: NavMenu = deepFreeze([
    },
    {
      href: '/how-to',
-      title: 'How To',
+      title: 'How To Use Plural',
      sections: [
        {
-          href: '/how-to/set-up',
-          title: 'Set Up',
-          sections: [
-            {
-              title: 'Plural CLI',
-              href: '/getting-started/quickstart',
-            },
-            {
-              title: 'Management Cluster',
-              // href: '/how-to/set-up/mgmt-cluster',
-              href: '/deployments/cli-quickstart',
-            },
-            {
-              title: 'Workload Cluster',
-              href: '/how-to/set-up/workload-cluster',
-            },
-            {
-              title: 'SCM Connection',
-              href: '/how-to/set-up/scm-connection',
-            },
-            {
-              title: 'Controllers',
-              href: '/how-to/set-up/controllers',
-            },
-          ],
+          title: 'Setting Up Your Management Cluster',
+          href: '/how-to/set-up/mgmt-cluster',
+        },
+        {
+          title: 'Integrate with your Source Control Provider',
+          href: '/how-to/set-up/scm-connection',
+        },
+        {
+          title: 'Set Up Your First Workload Cluster',
+          href: '/how-to/set-up/workload-cluster',
+        },
+        {
+          title: 'Set Up a Network Stack and other K8s Add-Ons',
+          href: '/how-to/set-up/controllers',
        },
      ],
    },