A repository for fetching GitHub repositories and publishing them to AWS S3 with versioning, using git, S3 and docker.io.
Make sure you have the following prerequisites installed:

There are a couple of steps to getting this going. The end result is a local dev environment :)
You will have to set up a `.envrc` file in the root of the checkout directory. The format is:
```shell
export PIPELINE_NAME=fetch
export PIPELINE_JOB_NAME=copy_concourse_fetch_source_to_s3
export AWS_DEFAULT_REGION=eu-west-1
export AWS_SECRET_ACCESS_KEY=...
export AWS_ACCESS_KEY_ID=...
export AWS_S3_BUCKET_NAME=concourse-published
export GITHUB_PRIVATE_KEY='-----BEGIN OPENSSH PRIVATE KEY-----
...
...
...
-----END OPENSSH PRIVATE KEY-----'
```
Don't forget to run `direnv allow` once this is done.
Next we want to generate and fly the pipeline template, which is explained further below. Please run the following commands from the checkout directory:
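The exact commands live in this repository, but as a rough sketch — assuming a local Concourse started via docker-compose and a `fly` target named `local` (both assumptions on my part) — the flow looks something like:

```shell
# Start a local Concourse (assumes a docker-compose.yml in the checkout)
docker-compose up -d

# Log in to the local Concourse web node with the test credentials
fly -t local login -c http://localhost:8080 -u test -p test

# Set and unpause the pipeline, using the names exported in .envrc
fly -t local set-pipeline -p "$PIPELINE_NAME" -c fetch.yml
fly -t local unpause-pipeline -p "$PIPELINE_NAME"
```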
Now you want to see the result in a browser. Browse to http://localhost:8080/ and log in using the following details:

- Username: test
- Password: test
If everything has gone right, you should see the following pipeline output.
This is the home view of Concourse.
This is the view after the pipeline has run.
These are the artifacts you should end up with.
## Further details about concourse-fetch
The control plane is extended through `fetch.yml`. A simple example for fetching this repo from `master` would be:
```yaml
- name: concourse_fetch_source
  uri: [email protected]:RealOrko/concourse-fetch.git
  private_key: ((github_private_key))
  branch: master
  version_branch: versions
```
After adding a new git repository, you can then run the following commands to see it run locally:
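As a sketch — again assuming a `fly` target named `local` and the names from `.envrc` — re-flying and triggering the updated pipeline would look something like:

```shell
# Re-set the pipeline after editing fetch.yml
fly -t local set-pipeline -p "$PIPELINE_NAME" -c fetch.yml

# Make sure it is unpaused, then kick off the job manually
fly -t local unpause-pipeline -p "$PIPELINE_NAME"
fly -t local trigger-job -j "$PIPELINE_NAME/$PIPELINE_JOB_NAME"
```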
If you wanted to source using promotion from a `foo` release branch, you would change the repository source in `fetch.yml` to look like this:
```yaml
- name: concourse_fetch_foo_source
  uri: [email protected]:RealOrko/concourse-fetch.git
  private_key: ((github_private_key))
  branch: foo
  version_branch: foo_versions
```
You could then, for example, release by merging `master` into `foo` (`master -> foo`). Branches always have to be created in the source repositories before you fly the pipeline. Try `bin/versionify` for any new branches you require before flying the pipeline, making sure you adjust branch names appropriately.
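In plain git terms (using the example branch names above; `bin/versionify` would additionally take care of the matching versions branch), that promotion looks roughly like:

```shell
# Create the release branch once, before flying the pipeline
git checkout -b foo master
git push origin foo

# Release: merge master into foo and push (master -> foo)
git checkout foo
git merge master
git push origin foo
```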
This repository uses the same file names to publish artifacts, so you should enable S3 bucket versioning to support consuming these artifacts via the `versioned_file` property of the s3-resource.
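On the consuming side, an s3-resource pinned to a versioned file would look something like this (the bucket and region match the `.envrc` above; the file path is a made-up placeholder):

```yaml
resources:
- name: fetched_source
  type: s3
  source:
    bucket: concourse-published
    region_name: eu-west-1
    versioned_file: concourse_fetch_source/source.tar.gz
    access_key_id: ((aws_access_key_id))
    secret_access_key: ((aws_secret_access_key))
```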
It is also considered good practice to set up an expiry rule for old versions; in my case I delete them after 5 days.
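With the AWS CLI, enabling bucket versioning plus a 5-day expiry for old versions could be sketched like this (the bucket name is the one from `.envrc`; the rule ID is arbitrary):

```shell
# Turn on versioning for the publish bucket
aws s3api put-bucket-versioning \
  --bucket concourse-published \
  --versioning-configuration Status=Enabled

# Expire noncurrent (old) object versions after 5 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket concourse-published \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 5}
    }]
  }'
```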
Happy fetching folks! :)