- Make
- Docker (with Compose)
- Account on GitHub (optional)
- Account on Google (optional)
At least one of the following has to be configured:
- Set up your Pipeline GitHub OAuth application according to this guide
- Set up your Pipeline GitLab OAuth application according to this guide
- Set up your Pipeline Google OAuth application according to this guide
- Set up your Pipeline with LDAP authentication according to this guide
- Use static Email/Password authentication following the example in `etc/config/dex.yml` (`staticPasswords` section); see the sketch after this list.
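For the static Email/Password option, a minimal sketch of what a `staticPasswords` entry in `etc/config/dex.yml` typically looks like (the values are illustrative; the hash below is a bcrypt hash of the string "password"):

```yaml
# Illustrative entry only; use your own email, username and bcrypt hash
enablePasswordDB: true
staticPasswords:
- email: "admin@example.com"
  # bcrypt hash of the plaintext password ("password" in this example)
  hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
  username: "admin"
  userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"
```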
To spin up the development environment with all dependencies, just run the following command:
$ plz up
The inverse of that command is of course:
$ plz down
which removes everything.
Create a `config/config.yaml` and `etc/config/dex.yml` config file based on their `.dist` counterparts with:
$ make config/config.yaml etc/config/dex.yml
Note: If you followed the quick start guide, these files should already exist.
As of now, the example config enables OAuth2 based authentication; this can be changed by modifying the example. OAuth2 based authentication requires a GitHub, GitLab or Google OAuth2 application, which can be created by following the GitHub, GitLab or Google tutorial.
Please set the `clientId` and the `clientSecret` in the `connectors:` section of `dex.yml`.
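As a rough sketch, a GitHub connector entry in `dex.yml` typically looks like the following (field names follow the upstream Dex connector format; the redirect URI is only an example and has to match your OAuth application's callback URL):

```yaml
connectors:
- type: github
  id: github
  name: GitHub
  config:
    # Values from your GitHub OAuth application
    clientID: <your-oauth-app-client-id>
    clientSecret: <your-oauth-app-client-secret>
    # Example value; must match the callback URL registered for the OAuth application
    redirectURI: http://localhost:5556/dex/callback
```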
If you are not using HTTPS, set `auth.cookie.secure = false`, otherwise you won't be able to log in via HTTP and you might get 401 errors. For local development you should set it to `false`.
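In `config/config.yaml` this corresponds to roughly the following (a sketch based on the `auth.cookie.secure` key above):

```yaml
auth:
  cookie:
    # Allow the auth cookie over plain HTTP for local development
    secure: false
```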
The development environment uses Docker Compose to create an isolated area for Pipeline.
You can easily start it by executing:
$ plz start
This will create a `mysql` and a `vault` container:
- Vault GUI: http://localhost:8200, log in with the token printed by `cat ~/.vault-token` (see the sketch below)
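For example, you can print that token and use it for a CLI login as well (assuming the `vault` CLI is installed locally):

```bash
# Print the root token (paste it into the Vault GUI login form),
# or log in with the CLI directly:
cat ~/.vault-token
vault login "$(cat ~/.vault-token)"
```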
Note: If you want to customize mount points and port mappings, create a `docker-compose.override.yml` file via `make docker-compose.override.yml` and edit the file manually. Please note that you might need to edit the application configuration file as well.
Note: For the Cloudinfo and Recommender services to work, you need to specify a live URL in `docker-compose.override.yml`, under the `environment` section of `uiproxy`.
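A rough sketch of such an override (the environment variable names are hypothetical placeholders; check the `uiproxy` service definition in `docker-compose.yml` for the actual names it expects):

```yaml
# docker-compose.override.yml (sketch only)
services:
  uiproxy:
    environment:
      # Hypothetical names, replace with the real ones used by the uiproxy service
      CLOUDINFO_URL: "https://<live-cloudinfo-host>"
      RECOMMENDER_URL: "https://<live-recommender-host>"
```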
To access Vault, the `VAULT_ADDR` environment variable has to be set; Pipeline stores JWT access tokens there.
export VAULT_ADDR=http://127.0.0.1:8200
Once you have the Docker containers running for the development environment, you should be able to start the Pipeline platform. You can build and then run it with:
$ plz build
$ plz run //cmd/pipeline
You will also need to run a Worker for background jobs:
$ plz run //cmd/worker
(Optionally, you could also build and run with VS Code or GoLand.)
If you happen to get an error similar to this on the first run:
Error 1146: Table 'pipeline.amazon_eks_profiles' doesn't exist
You should set `autoMigrate = true` in the `database` section of the `config/config.yaml` file.
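That is, in `config/config.yaml` (a sketch showing only the relevant key):

```yaml
database:
  # Create and migrate the schema automatically on startup
  autoMigrate: true
```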
You should now be able to log in on the Pipeline UI: http://localhost:4200/ui
To access Pipeline, one has to be authenticated and registered via Dex first. For programmatic API access, an access token has to be generated.
Tokens can be generated only with a browser (for now); to do that, please use the following URL to log in first:
- For local usage: https://localhost:9090/auth/dex/login
- For on-cloud usage: https://{control_plane_public_ip}/auth/dex/login
Please authenticate yourself with Dex. If everything is done correctly, you will be redirected. This might not work if you have blockers in your browser: if you are not being redirected, turn them off or try another browser. The browser session already contains the generated token in a cookie. An API token can be generated via (see the example after this list):
- For local usage: https://localhost:9090/pipeline/api/v1/tokens
- For on-cloud usage: https://{control_plane_public_ip}/pipeline/api/v1/tokens
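Once you have a token, it can be sent as a bearer token on API calls, for example (a sketch assuming the local setup and a token stored in the `TOKEN` variable; the endpoint is only an example):

```bash
# List your organizations using the generated access token
curl -k -H "Authorization: Bearer $TOKEN" https://localhost:9090/pipeline/api/v1/orgs
```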
Organizations created in Pipeline will have a domain registered in AWS's Route 53 DNS service. For this, the AWS credentials have to be available in Vault in the proper format (otherwise the feature is disabled):
vault kv put secret/banzaicloud/aws \
AWS_REGION=${AWS_REGION} \
AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
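You can verify that the secret is readable at the expected path with:

```bash
vault kv get secret/banzaicloud/aws
```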
Creating and using EKS clusters requires you to have the AWS IAM Authenticator for Kubernetes installed on your machine:
go get -u -v sigs.k8s.io/aws-iam-authenticator/cmd/aws-iam-authenticator
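To check that the binary is available (assuming your Go bin directory is on the `PATH`):

```bash
aws-iam-authenticator version
```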
How to update default EKS images
If you need to access the local Anchore server, uncomment the related services in `docker-compose.override.yml` and restart the environment with `make start`.
You can specify your own default Anchore policy bundle by adding a `json` file to the `config/anchore/policies` directory.
If you want to launch PKE clusters, you will need to ensure that the `pke` tool running on the cluster can access the Pipeline API. In a development environment you can do this, for example, with the following ngrok command: `ngrok http https://localhost:9090` (HTTPS requires free registration).
You will also need to adjust the `pipeline.external.url` configuration value. In the `pipeline` section of `config/config.yaml` you can add the value like below:
pipeline:
  external:
    # Base URL where the end users can reach this pipeline instance
    url: "http://<YOUR_NGROK_NUMBER>.ngrok.io/pipeline"
To be able to handle S3 repositories with Pipeline, the helm-s3 plugin needs to be installed in case you have Helm installed on your machine:
helm plugin install https://github.com/banzaicloud/helm-s3.git
In case you don't have Helm installed, you can download the helm-s3 plugin from here, extract it to a `helm-plugins` folder and set `HELM_PLUGINS=helm-plugins` when starting Pipeline.
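A rough sketch of that manual route (the archive name and the plugin directory below are placeholders, not exact values):

```bash
# Extract the downloaded helm-s3 release into a plugin directory (names are placeholders)
mkdir -p helm-plugins/helm-s3
tar -xzf helm-s3_<version>_<os>_<arch>.tar.gz -C helm-plugins/helm-s3
# Point Pipeline at the plugin directory when starting it
export HELM_PLUGINS=helm-plugins
```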