diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml index d8bec7228..c254f45d7 100644 --- a/.github/FUNDING.yml +++ b/.github/FUNDING.yml @@ -1,2 +1,2 @@ -custom: https://owasp.org/donate/?reponame=www-project-wrongsecrets&title=OWASP+wrongsecrets +custom: ["https://owasp.org/donate/?reponame=www-project-wrongsecrets&title=OWASP+wrongsecrets", "https://www.icrc.org/en/donate/ukraine"] github: OWASP diff --git a/.github/workflows/pre-commit.yml b/.github/workflows/pre-commit.yml new file mode 100644 index 000000000..bd2eeb48b --- /dev/null +++ b/.github/workflows/pre-commit.yml @@ -0,0 +1,51 @@ +name: Pre-commit check + +# Controls when the workflow will run +on: + pull_request: + branches: [master] + workflow_dispatch: + +env: + TF_DOCS_VERSION: v0.16.0 + TFSEC_VERSION: v1.27.6 + TFLINT_VERSION: v0.41.0 +permissions: + contents: read +jobs: + pre-commit: + name: Pre-commit check + runs-on: ubuntu-latest + steps: + - name: Checkout git repository + uses: actions/checkout@v3 + - name: Setup python + uses: actions/setup-python@v4 + with: + python-version: "3.9" + - uses: actions/cache@v3 + name: Cache plugin dir + with: + path: ~/.tflint.d/plugins + key: ${{ runner.os }}-tflint-${{ hashFiles('.tflint.hcl') }} + - name: Setup Terraform + uses: hashicorp/setup-terraform@v2 + with: + terraform_version: 1.1.7 + - name: Setup TFLint + uses: terraform-linters/setup-tflint@v2 + with: + tflint_version: ${{env.TFLINT_VERSION}} + - name: Setup Terraform docs + run: | + wget https://github.com/terraform-docs/terraform-docs/releases/download/${{env.TF_DOCS_VERSION}}/terraform-docs-${{env.TF_DOCS_VERSION}}-linux-amd64.tar.gz -O terraform_docs.tar.gz + tar -zxvf terraform_docs.tar.gz terraform-docs + chmod +x terraform-docs + mv terraform-docs /usr/local/bin/ + - name: Setup tfsec + run: | + curl -L --output tfsec https://github.com/aquasecurity/tfsec/releases/download/${{env.TFSEC_VERSION}}/tfsec-linux-amd64 + chmod +x tfsec + mv tfsec /usr/local/bin/ + - name: Pre-commit checks
uses: pre-commit/action@v3.0.0 diff --git a/guides-archived-use-readme-only/aws/aws.md b/guides-archived-use-readme-only/aws/aws.md deleted file mode 100644 index 501ba68c2..000000000 --- a/guides-archived-use-readme-only/aws/aws.md +++ /dev/null @@ -1,120 +0,0 @@ -# Example Setup with AWS - -**WARNING:** The resources created in this guide will cost about \$70.00/month. The actual price might depend on its usage, but make sure to delete the resources as described in Step 5 Deinstallation when you do not need them anymore. - -## Prerequisites - -This example expects you to have the following cli tools setup. - -1. [awscli](https://aws.amazon.com/cli/) -2. [eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) -3. [helm](https://helm.sh) -4. [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos) - -```sh -# First we'll need a cluster, you can create one using the eksctl cli. -# This will take a couple of minutes -eksctl create cluster \ ---name wrongsecrets-ctf-party \ ---version 1.21 \ ---nodegroup-name standard-workers \ ---node-type t3.medium \ ---nodes 2 \ ---nodes-min 1 \ ---nodes-max 4 \ ---node-ami auto - -# After completion verify that your kubectl context has been updated: -# Should print something like: Administrator@wrongsecrets-ctf-party.eu-central-1.eksctl.io -kubectl config current-context -``` - -## Step 2. Installing MultiJuicer via helm - -```sh -# You'll need to add the wrongsecrets-ctf-party helm repo to your helm repos -helm repo add wrongsecrets-ctf-party https://iteratec.github.io/multi-juicer/ - -helm install wrongsecrets-ctf-party wrongsecrets-ctf-party/wrongsecrets-ctf-party - -# kubernetes will now spin up the pods -# to verify every thing is starting up, run: -kubectl get pods -# This should show you two pods a wrongsecrets-balancer pod and a unusued-progress-watchdog pod -# Wait until both pods are ready -``` - -## Step 3. 
Verify the app is running correctly - -This step is optional, but helpful to catch errors quicker. - -```sh -# lets test out if the app is working correctly before proceeding -# for that we can port forward the JuiceBalancer service to your local machine -kubectl port-forward service/wrongsecrets-balancer 3000:3000 - -# Open up your browser for localhost:3000 -# You should be able to see the MultiJuicer Balancer UI - -# Try to create a team and see if everything works correctly -# You should be able to access a JuiceShop instances after a few seconds after creating a team, -# and after clicking the "Start Hacking" Button - -# You can also try out if the admin UI works correctly -# Go back to localhost:3000/balancer -# To log in as the admin log in as the team "admin" -# The password for the team gets autogenerated if not specified, you can extract it from the kubernetes secret: -kubectl get secrets wrongsecrets-balancer-secret -o=jsonpath='{.data.adminPassword}' | base64 --decode -``` - -## Step 4. 
Add Ingress to expose the app to the world - -Create a loadbalancer which is exposed is achieved by running the following command: - -```sh -kubectl create -f https://raw.githubusercontent.com/commjoen/wrongsecrets-ctf-party/firstport-activities/guides/aws/loadbalancer.yaml -``` - -You can get the LoadBalancer's DNS record either from the AWS console, or by running: - -```sh -kubectl get services - -# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -# wrongsecrets-balancer ClusterIP 10.100.29.23 3000/TCP 3m14s -# kubernetes ClusterIP 10.100.0.1 443/TCP 11h -# wrongsecrets-ctf-party-service-loadbalancer LoadBalancer 10.100.134.210 YOUR_DNS_RECORD_WILL_BE_HERE.eu-north-1.elb.amazonaws.com 80:32111/TCP 3m13s -``` - -Use `kubectl get pods`to see the pods you have successfully running, which should be similar to - -```sh -kubectl get pods -# NAME READY STATUS RESTARTS AGE -# cleanup-job-ID-ID 0/1 Completed 0 48m -# wrongsecrets-balancer-ID-ID 1/1 Running 0 80m -# unusued-progress-watchdog-ID-ID 1/1 Running 0 80m - - -kubectl get pods -n kube-system -# NAME READY STATUS RESTARTS AGE -# alb-ingress-controller-ID-ID 1/1 Running 0 30s -# aws-node-ID 1/1 Running 0 59m -# aws-node-ID 1/1 Running 0 59m -# coredns-ID-ID 1/1 Running 0 65m -# coredns-ID-ID 1/1 Running 0 65m -# kube-proxy-ID 1/1 Running 0 59m -# kube-proxy-ID 1/1 Running 0 59m -``` - -## Step 5. 
Deinstallation - -```sh -helm delete wrongsecrets-ctf-party - -# Delete the loadbalancer setup -kubectl delete -f kubectl create -f https://raw.githubusercontent.com/commjoen/wrongsecrets-ctf-party/firstport-activities/guides/aws/loadbalancer.yaml - -# Delete the kubernetes cluster -eksctl delete cluster wrongsecrets-ctf-party -``` diff --git a/guides-archived-use-readme-only/aws/loadbalancer.yaml b/guides-archived-use-readme-only/aws/loadbalancer.yaml deleted file mode 100644 index e45a75f78..000000000 --- a/guides-archived-use-readme-only/aws/loadbalancer.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: wrongsecrets-balancer-public -spec: - type: LoadBalancer - selector: - app.kubernetes.io/name: wrongsecrets-ctf-party - app.kubernetes.io/instance: wrongsecrets-ctf-party - ports: - - protocol: TCP - port: 80 - targetPort: 3000 diff --git a/guides-archived-use-readme-only/azure/azure.md b/guides-archived-use-readme-only/azure/azure.md deleted file mode 100644 index 2e62b688c..000000000 --- a/guides-archived-use-readme-only/azure/azure.md +++ /dev/null @@ -1,91 +0,0 @@ -# Example Setup with Microsoft Azure - -**NOTE:** This Guide is still a "Work in Progress", if you got any recommendations or issues with it, please post them into the related issue: https://github.com/iteratec/multi-juicer/issues/16 - -**WARNING:** The resources created in this guid will cost about \$??/month. -Make sure to delete the resources as described in Step 5 Deinstallation when you do not need them anymore. - -## 1. 
Starting the cluster - -```sh -# Before we can do anything we need a resource group -az group create --location westeurope --name wrongsecrets-ctf-party - -# let's create the cluster now -# I decreased the node count to 2, to dodge the default core limit -az aks create --resource-group wrongsecrets-ctf-party --name juicy-k8s --node-count 2 - -# now to authenticate fetch the credentials for the new cluster -az aks get-credentials --resource-group wrongsecrets-ctf-party --name juicy-k8s - -# verify by running -# should print "juicy-k8s" -kubectl config current-context -``` - -## Step 2. Installing MultiJuicer via helm - -```bash -# You'll need to add the wrongsecrets-ctf-party helm repo to your helm repos -helm repo add wrongsecrets-ctf-party https://iteratec.github.io/multi-juicer/ - -helm install wrongsecrets-ctf-party wrongsecrets-ctf-party/wrongsecrets-ctf-party - -# kubernetes will now spin up the pods -# to verify every thing is starting up, run: -kubectl get pods -# This should show you two pods a wrongsecrets-balancer pod and a unusued-progress-watchdog pod -# Wait until both pods are ready -``` - -## Step 3. Verify the app is running correctly - -This step is optional, but helpful to catch errors quicker. 
- -```bash -# lets test out if the app is working correctly before proceeding -# for that we can port forward the JuiceBalancer service to your local machine -kubectl port-forward service/wrongsecrets-balancer 3000:3000 - -# Open up your browser for localhost:3000 -# You should be able to see the MultiJuicer Balancer UI - -# Try to create a team and see if everything works correctly -# You should be able to access a JuiceShop instances after a few seconds after creating a team, -# and after clicking the "Start Hacking" Button - -# You can also try out if the admin UI works correctly -# Go back to localhost:3000/balancer -# To log in as the admin log in as the team "admin" -# The password for the team gets auto generated if not specified, you can extract it from the kubernetes secret: -kubectl get secrets wrongsecrets-balancer-secret -o=jsonpath='{.data.adminPassword}' | base64 --decode -``` - -## Step 4. External Connectivity - -Create a yaml file with the following contents: - -```bash -apiVersion: v1 -kind: Service -metadata: - name: juice-loadbalancer -spec: - selector: - app.kubernetes.io/name: wrongsecrets-ctf-party - ports: - - protocol: TCP - port: 80 - targetPort: 3000 - type: LoadBalancer -``` - -Then, create the new Service with the following using the kubectl command. The Azure Cloud Shell (https://shell.azure.com) can be used for this. - -```bash -kubectl create -f loadbalancer.yaml -``` - -## Step 5. SSL - -To expose multi-juicer over https you should use a propper ingress controller instead of just a loadbalancer. This will give you far better control. Remove the loadbalancer from step 4 once you have setup the https connection. 
To continue follow [the multi-juicer azure ssl guide](ssl.md) diff --git a/guides-archived-use-readme-only/azure/ssl.md b/guides-archived-use-readme-only/azure/ssl.md deleted file mode 100644 index 000740093..000000000 --- a/guides-archived-use-readme-only/azure/ssl.md +++ /dev/null @@ -1,218 +0,0 @@ - -# Setup SSL for multi-juicer with Microsoft Azure - -Following this guide, you should be able to setup a https ingress with certificates from letsencrypt. https://docs.microsoft.com/en-us/azure/aks/ingress-tls?tabs=azure-cli . -This guide is based on the official microsoft guide and tested on azure. - -Please note: Make sure that you haven't setup an ingress already. There is an option for configuring an ingress when installing multi-juicer with your own `values.yaml`. This guide require that you haven't ventured down that path. - -## 1. Create the Container registry - -First create a container registry where you will be storing the images for your cert-manager and nginx controller - -NB: Please note that you will need to pick a new name for the registry since the name needs to be globally unique. 
- -```bash -az acr create -n -g wrongsecrets-ctf-party --sku basic - -``` - -Then connect the registry with your multi-juicer kubernetes cluster - -```bash -az aks update -n juicy-k8s -g wrongsecrets-ctf-party --attach-acr -``` - -Import the images for the cert-manager and nginx controller - -```bash -REGISTRY_NAME= -SOURCE_REGISTRY=k8s.gcr.io -CONTROLLER_IMAGE=ingress-nginx/controller -CONTROLLER_TAG=v1.0.4 -PATCH_IMAGE=ingress-nginx/kube-webhook-certgen -PATCH_TAG=v1.1.1 -DEFAULTBACKEND_IMAGE=defaultbackend-amd64 -DEFAULTBACKEND_TAG=1.5 -CERT_MANAGER_REGISTRY=quay.io -CERT_MANAGER_TAG=v1.5.4 -CERT_MANAGER_IMAGE_CONTROLLER=jetstack/cert-manager-controller -CERT_MANAGER_IMAGE_WEBHOOK=jetstack/cert-manager-webhook -CERT_MANAGER_IMAGE_CAINJECTOR=jetstack/cert-manager-cainjector - -az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG -az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG -az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG -az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG -az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG -az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG - -``` - -## 2. 
Create an nginx ingress controller - -```bash -# Add the ingress-nginx repository -helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx - -# Set variable for ACR location to use for pulling images -ACR_URL=.azurecr.io - -# Use Helm to deploy an NGINX ingress controller -helm install nginx-ingress ingress-nginx/ingress-nginx \ - --version 4.0.13 \ - --namespace default --create-namespace \ - --set controller.replicaCount=2 \ - --set controller.nodeSelector."kubernetes\.io/os"=linux \ - --set controller.image.registry=$ACR_URL \ - --set controller.image.image=$CONTROLLER_IMAGE \ - --set controller.image.tag=$CONTROLLER_TAG \ - --set controller.image.digest="" \ - --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \ - --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \ - --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ - --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ - --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \ - --set controller.admissionWebhooks.patch.image.digest="" \ - --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \ - --set defaultBackend.image.registry=$ACR_URL \ - --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ - --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \ - --set defaultBackend.image.digest="" -``` - -## 3. Configure a FQDN - -Get the external ip of the controller - -```bash -$ kubectl --namespace default get services -o wide -w nginx-ingress-ingress-nginx-controller - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR -nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.133 EXTERNAL_IP 80:32486/TCP,443:30953/TCP 44s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx -``` - -Configure a FQDN for your external ip. 
The DNS has to be unique - -NB: See this guide if you have your own domain: https://docs.microsoft.com/en-us/azure/aks/ingress-tls?tabs=azure-cli#add-an-a-record-to-your-dns-zone - -```bash - -# Public IP address of your ingress controller -IP="MY_EXTERNAL_IP" - -# Name to associate with public IP address -DNSNAME="my-multi-juicer" - -# Get the resource-id of the public ip -PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv) - -# Update public ip address with DNS name -az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME - -``` - -## 4. Install the cert-manager - -```bash -# Label the default namespace to disable resource validation -kubectl label namespace default cert-manager.io/disable-validation=true - -# Add the Jetstack Helm repository -helm repo add jetstack https://charts.jetstack.io - -# Update your local Helm chart repository cache -helm repo update - -# Install the cert-manager Helm chart -helm install cert-manager jetstack/cert-manager \ - --namespace default \ - --version $CERT_MANAGER_TAG \ - --set installCRDs=true \ - --set nodeSelector."kubernetes\.io/os"=linux \ - --set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \ - --set image.tag=$CERT_MANAGER_TAG \ - --set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \ - --set webhook.image.tag=$CERT_MANAGER_TAG \ - --set cainjector.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CAINJECTOR \ - --set cainjector.image.tag=$CERT_MANAGER_TAG -``` -## 5. 
Create a cluster issuer - -Save the following content as `cluster-issuer.yaml` -```bash -apiVersion: cert-manager.io/v1 -kind: ClusterIssuer -metadata: - name: letsencrypt -spec: - acme: - server: https://acme-v02.api.letsencrypt.org/directory - email: MY_EMAIL_ADDRESS - privateKeySecretRef: - name: letsencrypt - solvers: - - http01: - ingress: - class: nginx - podTemplate: - spec: - nodeSelector: - "kubernetes.io/os": linux -``` -To create the issuer, use the kubectl apply command. -```bash -kubectl apply -f cluster-issuer.yaml -``` - -## 6. Create an ingress route - -Save the following content as `ingress.yaml` - -```bash -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: my-wrongsecrets-ctf-party - annotations: - cert-manager.io/cluster-issuer: letsencrypt -spec: - ingressClassName: nginx - tls: - - hosts: - - my-wrongsecrets-ctf-party..cloudapp.azure.com - secretName: tls-secret - rules: - - host: my-wrongsecrets-ctf-party..cloudapp.azure.com - http: - paths: - - path: / - pathType: ImplementationSpecific - backend: - service: - name: wrongsecrets-balancer - port: - number: 3000 -``` -Edit the host so that it corresponds to the host associated with your public ip. You find the hostname of your public ip by executing this command. - -```bash -az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --output tsv -``` - - -Create the ingress route - -```bash -kubectl apply -f ingress.yaml --namespace default -``` - -Verify that the a certificate has been assigned. NB: can take several minutes. - -```bash -$ kubectl get certificate --namespace default - -NAME READY SECRET AGE -tls-secret True tls-secret 11m -``` -Test that the ingress is working by opening a browser and trying the host address. 
Make sure that you use https diff --git a/guides-archived-use-readme-only/digital-ocean/digital-ocean.md b/guides-archived-use-readme-only/digital-ocean/digital-ocean.md deleted file mode 100644 index 00e1cc1e2..000000000 --- a/guides-archived-use-readme-only/digital-ocean/digital-ocean.md +++ /dev/null @@ -1,96 +0,0 @@ -# Example Setup with Digital Ocean - -**WARNING:** The resources created in this guide will cost about \$45.00/month. -Make sure to delete the resources as described in "Step 5 Deinstallation" when you do not need them anymore. - -## Prerequisites - -This example expects you to have the following cli tools setup. - -1. [doctl](https://github.com/digitalocean/doctl) -2. [helm](https://helm.sh) -3. [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos) - -## Step 1. Starting the cluster - -```bash -# First we'll need a cluster, you can create one using the DigitalOcean cli. -# This will take a couple of minutes -doctl kubernetes cluster create juicy-k8s - -# After completion verify that your kubectl context has been updated: -# Should print something like: do-nyc1-juicy-k8s -kubectl config current-context -``` - -## Step 2. Installing MultiJuicer via helm - -```bash -# You'll need to add the wrongsecrets-ctf-party helm repo to your helm repos -helm repo add wrongsecrets-ctf-party https://iteratec.github.io/multi-juicer/ - -helm install wrongsecrets-ctf-party wrongsecrets-ctf-party/wrongsecrets-ctf-party - -# kubernetes will now spin up the pods -# to verify every thing is starting up, run: -kubectl get pods -# This should show you two pods a wrongsecrets-balancer pod and a unusued-progress-watchdog pod -# Wait until both pods are ready -``` - -## Step 3. Verify the app is running correctly - -This step is optional, but helpful to catch errors quicker. 
- -```bash -# lets test out if the app is working correctly before proceeding -# for that we can port forward the JuiceBalancer service to your local machine -kubectl port-forward service/wrongsecrets-balancer 3000:3000 - -# Open up your browser for localhost:3000 -# You should be able to see the MultiJuicer Balancer UI - -# Try to create a team and see if everything works correctly -# You should be able to access a JuiceShop instances after a few seconds after creating a team, -# and after clicking the "Start Hacking" Button - -# You can also try out if the admin UI works correctly -# Go back to localhost:3000/balancer -# To log in as the admin log in as the team "admin" -# The password for the team gets auto generated if not specified, you can extract it from the kubernetes secret: -kubectl get secrets wrongsecrets-balancer-secret -o=jsonpath='{.data.adminPassword}' | base64 --decode -``` - -## Step 4. Add a LoadBalancer to expose the app to the world - -DigitalOcean lets you create a DigitalOcean Loadbalancer to expose your kubernetes deployment without having to setup the whole kubernetes ingress stuff. This makes it especially easy if you also manage your domains in DigitalOcean as DigitalOcean will also be able to provide you with the tls certificates. - -```bash - -# Get you digitalocean cert id -doctl compute certificate list - -# We got a example loadbalancer yaml for this example in the repository -# Edit the cert id in do-lb.yaml to the cert id of your domain -wget https://raw.githubusercontent.com/iteratec/multi-juicer/main/guides/digital-ocean/do-lb.yaml -vim do-lb.yaml - -# Create the loadbalancer -# This might take a couple of minutes -kubectl create -f do-lb.yaml - -# If it takes longer than a few minutes take a detailed look at the loadbalancer -kubectl describe services wrongsecrets-ctf-party-loadbalancer -``` - -## Step 5. 
Deinstallation - -```bash -helm delete wrongsecrets-ctf-party - -# Delete the loadbalancer -kubectl delete -f do-lb.yaml - -# Delete the kubernetes cluster -doctl kubernetes cluster delete juicy-k8s -``` diff --git a/guides-archived-use-readme-only/digital-ocean/do-lb.yaml b/guides-archived-use-readme-only/digital-ocean/do-lb.yaml deleted file mode 100644 index c808ab9a7..000000000 --- a/guides-archived-use-readme-only/digital-ocean/do-lb.yaml +++ /dev/null @@ -1,22 +0,0 @@ -kind: Service -apiVersion: v1 -metadata: - name: multi-juicer-loadbalancer - annotations: - # availible annotations to configure do loadbalancer: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md - service.beta.kubernetes.io/do-loadbalancer-protocol: 'http2' - service.beta.kubernetes.io/do-loadbalancer-certificate-id: #'b0d0a68b-25e9-4881-8be9-d4cf8fc6cc4d' - service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: 'true' - service.beta.kubernetes.io/do-loadbalancer-algorithm: 'round_robin' - service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: 'http' - service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: '/balancer/' -spec: - type: LoadBalancer - selector: - app.kubernetes.io/instance: wrongsecrets-ctf-party - app.kubernetes.io/name: wrongsecrets-ctf-party - ports: - - name: http - protocol: TCP - port: 443 - targetPort: 3000 diff --git a/guides-archived-use-readme-only/openshift/openshift.md b/guides-archived-use-readme-only/openshift/openshift.md deleted file mode 100644 index be74995d1..000000000 --- a/guides-archived-use-readme-only/openshift/openshift.md +++ /dev/null @@ -1,74 +0,0 @@ -# Example Setup with OpenShift - -**NOTE:** This Guide was tested with OpenShift 3.11, if this doesn't work with newer OpenShift versions please open up an issue. Thank you! 👏 - -## Prerequisites - -This example expects you to have the following prerequisites. - -1. A running OpenShift Cluster -2. 
[oc](https://github.com/openshift/origin/releases), OpenShift CLI -3. [helm](https://helm.sh), helm3 recommended, to avoid tiller headache - -## Step 1. Log into the OpenShift cluster - -```bash -# Log in with your OpenShift CLI -# You can copy the login command including your token from the web ui -oc login https://console.openshift.example.com --token=**** - -# Create a new project to hold the wrongsecrets-ctf-party resources -oc new-project wrongsecrets-ctf-party -``` - -## Step 2. Installing MultiJuicer via helm - -```bash -# You'll need to add the wrongsecrets-ctf-party helm repo to your helm repos -helm repo add wrongsecrets-ctf-party https://iteratec.github.io/multi-juicer/ - -helm install wrongsecrets-ctf-party wrongsecrets-ctf-party/wrongsecrets-ctf-party ./wrongsecrets-ctf-party/helm/wrongsecrets-ctf-party/ -``` - -## Step 3. Verify the app is running correctly - -This step is optional, but helpful to catch errors quicker. - -```bash -# lets test out if the app is working correctly before proceeding -# for that we can port forward the JuiceBalancer service to your local machine -oc port-forward service/wrongsecrets-balancer 3000:3000 - -# Open up your browser for localhost:3000 -# You should be able to see the MultiJuicer Balancer UI - -# Try to create a team and see if everything works correctly -# You should be able to access a JuiceShop instances after a few seconds after creating a team, -# and after clicking the "Start Hacking" Button - -# You can also try out if the admin UI works correctly -# Go back to localhost:3000/balancer -# To log in as the admin log in as the team "admin" -# The password for the team gets auto generated if not specified, you can extract it from the kubernetes secret: -oc get secrets wrongsecrets-balancer-secret -o=jsonpath='{.data.adminPassword}' | base64 --decode -``` - -## Step 4. Add a route to expose the app to the world - -OpenShift lets you create routes to expose your app to the internet. 
- -```bash -# Create the route. -# Make sure to adjust the hostname to match the one of your org. -# You can also perform this step easily via the OpenShift web ui. -oc create route edge wrongsecrets-balancer --service wrongsecrets-balancer --hostname wrongsecrets-ctf-party.cloudapps.example.com -``` - -## Step 4. Deinstallation - -```bash -helm delete wrongsecrets-ctf-party - -# Delete the route -oc delete route edge wrongsecrets-balancer -``` diff --git a/guides/aws/aws.md b/guides/aws/aws.md new file mode 100644 index 000000000..43338c5e0 --- /dev/null +++ b/guides/aws/aws.md @@ -0,0 +1,3 @@ +# Example Setup with AWS + +Please check the [aws folders readme file](../aws/README.md). diff --git a/guides-archived-use-readme-only/k8s/k8s-juice-service.yaml b/guides/k8s/k8s-juice-service.yaml similarity index 100% rename from guides-archived-use-readme-only/k8s/k8s-juice-service.yaml rename to guides/k8s/k8s-juice-service.yaml diff --git a/guides-archived-use-readme-only/k8s/k8s.md b/guides/k8s/k8s.md similarity index 100% rename from guides-archived-use-readme-only/k8s/k8s.md rename to guides/k8s/k8s.md diff --git a/guides-archived-use-readme-only/monitoring-setup/monitoring.md b/guides/monitoring-setup/monitoring.md similarity index 100% rename from guides-archived-use-readme-only/monitoring-setup/monitoring.md rename to guides/monitoring-setup/monitoring.md diff --git a/guides-archived-use-readme-only/monitoring-setup/prometheus-operator-config.yaml b/guides/monitoring-setup/prometheus-operator-config.yaml similarity index 100% rename from guides-archived-use-readme-only/monitoring-setup/prometheus-operator-config.yaml rename to guides/monitoring-setup/prometheus-operator-config.yaml diff --git a/guides-archived-use-readme-only/production-notes/production-notes.md b/guides/production-notes/production-notes.md similarity index 100% rename from guides-archived-use-readme-only/production-notes/production-notes.md rename to guides/production-notes/production-notes.md 
diff --git a/readme.md b/readme.md index 0ce1a3579..fe8828524 100644 --- a/readme.md +++ b/readme.md @@ -1,39 +1,46 @@ # WrongSecrets CTF Party _Powered by MultiJuicer_ -This is a fork of MultiJuicer, which is now being rebuilt in order to server WrongSecret in creating CTFs. The tracking isssue of the first endavour can be found at https://github.com/commjoen/wrongsecrets/issues/403 . +Want to play OWASP WrongSecrets in a large group in CTF mode, but without going through all the hassle of setting up local copies of OWASP WrongSecrets? Here is OWASP WrongSecrets CTF Party! This is a fork of OWASP MultiJuicer, which is adapted to become a dynamic multi-tenant setup for doing a CTF together! Note that we: -- have a Webtop integrated -- have a WrongSecrets instance integrated +- have a [Webtop](https://docs.linuxserver.io/images/docker-webtop) integrated for each player +- have a WrongSecrets instance integrated for each player - A working admin interface which can restart both or delete both (by deleting the full namespace) - Do not support any progress watchdog as you will have access to it, we therefore disabled it. ## Special thanks -Special thanks to Madhu Akula, Ben de Haan, and Mike Woudenberg for making this port a reality! +Special thanks to [@madhuakula](https://github.com/madhuakula), [@bendehaan](https://github.com/bendehaan), and [@mikewoudenberg](https://github.com/mikewoudenberg) for making this port a reality! ## What you need to know This environment uses a webtop and an instance of wrongsecrets per user. This means that you need per user: -- 2.5 CPU (min = 1 , limit = 2.5) -- 3.5 GB RAM (min 2.5GB, limit = 3.5GB) +- 2.5 CPU (min = 0.5, limit = 2.5) +- 3.5 GB RAM (min 1 GB, limit = 3.5GB) - 8GB HD (min 3 GB, limit = 8GB) ### Running this on minikube -A 3-6 contestant game can be played on a local minikube with updated cpu & memory settings (e.g. 6 CPUs, 9 GB ram).
+A 3-6 contestant game can be played on a local minikube with updated cpu & memory settings (e.g. 6 virtual CPUs, 9 GB RAM). ### Running this on AWS EKS with larger groups -A 100 contestant game can be played on the AWS setup, which will require around 200 (100-250) CPUs, 300 (250-350) GB Ram, and 800 GB of storage available in the cluster. Note that we have configured everything based on autoscaling in AWS. This means that you can often start with a cluster about 20% of the size of the "limit" numbers and then see how things evolve. If you see heavy under-utilization as players are not very actively engaged: you can often scale down the amount of nodes required. +A 100 contestant game can be played on the AWS setup, which will require around 200 (100-250) CPUs, 300 (250-350) GB RAM, and 800 GB of storage available in the cluster. Note that we have configured everything based on autoscaling in AWS. This means that you can often start with a cluster about 20% of the size of the "limit" numbers and then see how things evolve. Note that this is only the case when all players are very actively fuzzing the WrongSecrets app while running heavy apps on their Webtops. Very often, you will see that you are using just 25% of what is stated here. So, by using our Terraform setup (including an autoscaling managed nodegroup), you can reduce the cost of your CTF by a lot! -## Status +## Status - Experimental release -**This is by no means ready for anything, and work in progress.** +This is an experimental release. It has already proven itself at 2 CTFs; we just have not completed the documentation and the clean-up of the Helm chart yet. However, the basics work, and it can support a good crowd. Currently, we only support using Minikube and AWS EKS (_**Please follow the readme in the AWS folder if you want to use EKS, as the guides section is not updated yet**_). -Still want to play?
Ok, here we go: -We currently only support minikube and AWS EKS (_**Please follow the readme in the aws folder, as the guides section is not updated yet**_). +## How to use it +In general, we have two types of setup: one is the "manual" setup, for which you need to put in all the answers + +### Automated setup + + + +### Manual setup: + + -## How to use it You need 3 things: - This infrastructure
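The new workflow above delegates the actual checks to `pre-commit/action`, which runs whatever hooks are declared in a `.pre-commit-config.yaml` at the repository root. The repository's real config is not part of this diff; as an illustration only, a minimal config matching the tools the workflow installs (terraform-docs, tflint, tfsec) could look like the sketch below, assuming the widely used `antonbabenko/pre-commit-terraform` hooks. The `rev` tags are placeholders, not the versions the project actually pins.

```yaml
repos:
  # Terraform hooks matching the tools the workflow installs;
  # hook ids come from the antonbabenko/pre-commit-terraform repository.
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.76.0  # placeholder tag; pin to the version the project actually uses
    hooks:
      - id: terraform_fmt
      - id: terraform_docs
      - id: terraform_tflint
      - id: terraform_tfsec
  # Generic hygiene hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0  # placeholder tag
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
```

With such a config in place, the CI check can be reproduced locally with `pip install pre-commit` followed by `pre-commit run --all-files`.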