feat: Add packer build agent templates for Linux (Ubuntu Jammy 22.04, Amazon Linux 2023) (#46)
jorisdon authored Jun 3, 2024
1 parent 3170d7c commit 1af2df2
Showing 20 changed files with 1,176 additions and 0 deletions.
85 changes: 85 additions & 0 deletions assets/packer/build-agents/linux/README.md
# Packer templates for Linux build agents

This folder contains [Packer](https://www.packer.io/) templates for Linux build agents. You can use these templates as-is, or modify them to suit your needs.

The following templates are currently supported:
|Operating system | CPU architecture | File location |
|---|---|---|
|Ubuntu Jammy 22.04 | x86_64 (a.k.a. amd64) | `x86_64/ubuntu-jammy-22.04-amd64-server.pkr.hcl` |
|Ubuntu Jammy 22.04 | aarch64 (a.k.a. arm64) | `aarch64/ubuntu-jammy-22.04-arm64-server.pkr.hcl` |
|Amazon Linux 2023 | x86_64 (a.k.a. amd64) | `x86_64/amazon-linux-2023-x86_64.pkr.hcl` |
|Amazon Linux 2023 | aarch64 (a.k.a. arm64) | `aarch64/amazon-linux-2023-arm64.pkr.hcl` |

## Usage

1. Make a copy of `example.pkrvars.hcl` and adjust the input variables as needed.
2. Ensure you have active AWS credentials.
3. Run `packer build --var-file=<your .pkrvars.hcl file> <path to .pkr.hcl file>` and wait for the build to complete.
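As a concrete sketch of the steps above (the copied file name `my-vars.pkrvars.hcl` is just an example; depending on your Packer version you may first need `packer init` to install the required `amazon` plugin):

```sh
cp example.pkrvars.hcl my-vars.pkrvars.hcl
# edit my-vars.pkrvars.hcl to set vpc_id, subnet_id, public_key, etc.
packer init x86_64/ubuntu-jammy-22.04-amd64-server.pkr.hcl
packer build --var-file=my-vars.pkrvars.hcl x86_64/ubuntu-jammy-22.04-amd64-server.pkr.hcl
```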

## Software packages included

The templates install various software packages:

### common tools

Some common tools are installed to support installing other software, performing maintenance tasks, and compiling some C++ software:

* git
* curl
* jq
* unzip
* dos2unix
* [AWS CLI v2](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html)
* [AWS Systems Manager Agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html)
* [Amazon Corretto](https://aws.amazon.com/corretto/)
* mount.nfs, for mounting FSx volumes over NFS
* python3
* python3 packages: `pip`, `requests`, `boto3`, and `botocore`
* clang
* cmake3
* scons
* Development libraries for compiling the [Amazon GameLift Server SDK for C++](https://aws.amazon.com/gamelift/)
* Development libraries for compiling the Godot 4 game engine (if available in the OS's package manager)

### mold

The '[mold](https://github.com/rui314/mold)' linker is installed to enable faster linking.

### FSx automounter service

The FSx automounter systemd service is a Python service that automatically mounts FSx for OpenZFS volumes at instance boot. It uses resource tags on the FSx volumes to determine whether and where to mount each volume.

You can use the following tags on FSx volumes:
* `automount-fsx-volume-name` tag: specifies the name of the local mount point. The service prefixes the specified name with `fsx_`.
* `automount-fsx-volume-on` tag: a space-delimited list of EC2 instance names on which the service (if running on that instance) will automatically mount the volume.

For example, if the FSx automounter service is running on an EC2 instance with Name tag 'ubuntu-builder', and an FSx volume has tag `automount-fsx-volume-on`=`al2023-builder ubuntu-builder` and tag `automount-fsx-volume-name`=`workspace`, then the automounter will automatically mount that volume on `/mnt/fsx_workspace`.

Note that the automounter service makes use of the [ListTagsForResource](https://docs.aws.amazon.com/fsx/latest/APIReference/API_ListTagsForResource.html) FSx API call, which is rate-limited. If you intend to scale up hundreds of EC2 instances that are running this service, then we recommend [automatically mounting FSx volumes using `/etc/fstab`](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/attach-linux-client.html).
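If you opt for the `/etc/fstab` approach instead, an entry could look like the following (the file system DNS name `fs-0123456789abcdef0.fsx.us-west-2.amazonaws.com` and volume path `/fsx/workspace` are hypothetical placeholders for your own file system):

```
fs-0123456789abcdef0.fsx.us-west-2.amazonaws.com:/fsx/workspace /mnt/fsx_workspace nfs nfsvers=4.1,defaults 0 0
```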

### mount_ephemeral service

The mount_ephemeral service is a systemd service, written as a simple bash script, that automatically mounts an NVMe-attached instance storage volume as temporary storage. It does this by formatting `/dev/nvme1n1` as XFS and then mounting it on `/tmp`. This service runs at instance boot.
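The unit file itself is not shown in this README, but based on the install steps in the Packer templates (the script is copied to `/opt/mount_ephemeral.sh` and enabled via systemd), a minimal unit of roughly this shape would do the job; treat it as an illustrative sketch rather than the repository's actual file:

```
[Unit]
Description=Format and mount NVMe instance storage on /tmp
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/opt/mount_ephemeral.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```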

### create_swap service

The create_swap service is a systemd service written as a simple bash script that creates a 1GB swap file on `/swapfile`. This service runs on instance bootup.
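The script itself is not included in this README; below is a minimal sketch of what a 1 GB swap-file setup plausibly looks like. The function name `create_swap` and the demo path are illustrative, and the final `swapon` step, which requires root, is shown only as a comment:

```shell
#!/bin/sh
# Illustrative sketch of a create_swap-style script; not the repository's actual file.
create_swap() {
  swapfile="$1"
  [ -f "$swapfile" ] && return 0     # already set up on a previous boot
  # Reserve 1 GiB; fall back to dd where fallocate is unsupported.
  fallocate -l 1G "$swapfile" 2>/dev/null ||
    dd if=/dev/zero of="$swapfile" bs=1M count=1024
  chmod 600 "$swapfile"              # swap files must not be world-readable
  mkswap "$swapfile"                 # write the swap signature
  # The real service would now run (as root): swapon "$swapfile"
}
create_swap "${1:-/tmp/swapfile.demo}"
```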

### sccache

'[sccache](https://github.com/mozilla/sccache)' is installed to cache C/C++ compilation artifacts, which can speed up builds by avoiding redundant work.

sccache is installed as a _systemd service_, and configured to use `/mnt/fsx_cache/sccache` as its cache folder. The service expects this folder to be available or set up by another service.
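The `sccache.service` unit is installed by the Packer templates but its contents are not shown here. One plausible shape, assuming sccache's standard `SCCACHE_DIR` and `SCCACHE_NO_DAEMON` environment variables and an install location of `/usr/local/bin/sccache` (both assumptions), would be:

```
[Unit]
Description=sccache compilation cache server
After=network.target

[Service]
Environment=SCCACHE_DIR=/mnt/fsx_cache/sccache
Environment=SCCACHE_NO_DAEMON=1
ExecStart=/usr/local/bin/sccache --start-server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```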

### octobuild

'[Octobuild](https://github.com/octobuild/octobuild)' is installed to act as a compilation cache for Unreal Engine.

Octobuild is configured (in [octobuild.conf](octobuild.conf)) to use `/mnt/fsx_cache/octobuild_cache` as its cache folder, and expects this folder to be available or set up by another service.

NOTE: Octobuild is not supported on aarch64, and therefore not installed there.


## Processor architectures and naming conventions

Within this folder, scripts are named using the processor architecture names reported by `uname -m`, which is why some script names contain "x86_64" or "aarch64". The Packer template `.pkr.hcl` files instead follow the naming conventions of the operating systems they are based on. Because some operating systems use inconsistent architecture terminology in their own naming, that inconsistency is reflected here as well.
213 changes: 213 additions & 0 deletions assets/packer/build-agents/linux/aarch64/amazon-linux-2023-arm64.pkr.hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 1.2.8"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

variable "region" {
  type    = string
  default = "us-west-2"
}

variable "profile" {
  type    = string
  default = "DEFAULT"
}

variable "vpc_id" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "ami_prefix" {
  type    = string
  default = "jenkins-builder-amazon-linux-2023-arm64"
}

variable "public_key" {
  type = string
}

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "al2023" {
  ami_name      = "${var.ami_prefix}-${local.timestamp}"
  instance_type = "t4g.small"
  region        = var.region
  profile       = var.profile
  source_ami_filter {
    filters = {
      name                = "al2023-ami-2023.*-arm64"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["amazon"]
  }
  ssh_username = "ec2-user"
  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 1
    instance_metadata_tags      = "enabled"
  }
  imds_support = "v2.0"

  # network specific details
  vpc_id                      = var.vpc_id
  subnet_id                   = var.subnet_id
  associate_public_ip_address = true
}

build {
  name = "jenkins-linux-packer"
  sources = [
    "source.amazon-ebs.al2023"
  ]

  provisioner "file" {
    source      = "install_common.al2023.sh"
    destination = "/tmp/install_common.al2023.sh"
  }
  provisioner "shell" {
    inline = [<<-EOF
      cloud-init status --wait
      sudo chmod 755 /tmp/install_common.al2023.sh
      /tmp/install_common.al2023.sh
      EOF
    ]
  }

  # add the public key
  provisioner "shell" {
    inline = [<<-EOF
      echo "${var.public_key}" >> ~/.ssh/authorized_keys
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys
      EOF
    ]
  }

  provisioner "file" {
    source      = "install_mold.sh"
    destination = "/tmp/install_mold.sh"
  }
  provisioner "shell" {
    inline = [<<-EOF
      sudo chmod 755 /tmp/install_mold.sh
      /tmp/install_mold.sh
      EOF
    ]
  }

  # octobuild currently does not build on Arm64, so skipping...
  #provisioner "file" {
  #  source      = "octobuild.conf"
  #  destination = "/tmp/octobuild.conf"
  #}
  #provisioner "file" {
  #  source      = "install_octobuild.al2023.arm64.sh"
  #  destination = "/tmp/install_octobuild.al2023.arm64.sh"
  #}
  #provisioner "shell" {
  #  inline = [<<-EOF
  #    sudo chmod 755 /tmp/install_octobuild.al2023.arm64.sh
  #    /tmp/install_octobuild.al2023.arm64.sh
  #    sudo mkdir -p /etc/octobuild/
  #    sudo cp /tmp/octobuild.conf /etc/octobuild/octobuild.conf
  #    EOF
  #  ]
  #}

  provisioner "file" {
    source      = "fsx_automounter.py"
    destination = "/tmp/fsx_automounter.py"
  }
  provisioner "file" {
    source      = "fsx_automounter.service"
    destination = "/tmp/fsx_automounter.service"
  }
  provisioner "shell" {
    inline = [<<-EOF
      sudo cp /tmp/fsx_automounter.py /opt/fsx_automounter.py
      sudo dos2unix /opt/fsx_automounter.py
      sudo chmod 755 /opt/fsx_automounter.py
      sudo mkdir -p /etc/systemd/system/
      sudo cp /tmp/fsx_automounter.service /etc/systemd/system/fsx_automounter.service
      sudo chmod 755 /etc/systemd/system/fsx_automounter.service
      sudo systemctl enable fsx_automounter.service
      EOF
    ]
  }

  # set up script to automatically format and mount ephemeral storage
  provisioner "file" {
    source      = "mount_ephemeral.sh"
    destination = "/tmp/mount_ephemeral.sh"
  }
  provisioner "file" {
    source      = "mount_ephemeral.service"
    destination = "/tmp/mount_ephemeral.service"
  }
  provisioner "shell" {
    inline = [<<-EOF
      sudo cp /tmp/mount_ephemeral.sh /opt/mount_ephemeral.sh
      sudo dos2unix /opt/mount_ephemeral.sh
      sudo chmod 755 /opt/mount_ephemeral.sh
      sudo mkdir -p /etc/systemd/system/
      sudo cp /tmp/mount_ephemeral.service /etc/systemd/system/mount_ephemeral.service
      sudo chmod 755 /etc/systemd/system/mount_ephemeral.service
      sudo systemctl enable mount_ephemeral.service
      EOF
    ]
  }

  provisioner "file" {
    source      = "create_swap.sh"
    destination = "/tmp/create_swap.sh"
  }
  provisioner "file" {
    source      = "create_swap.service"
    destination = "/tmp/create_swap.service"
  }
  provisioner "shell" {
    inline = [<<-EOF
      sudo cp /tmp/create_swap.sh /opt/create_swap.sh
      sudo dos2unix /opt/create_swap.sh
      sudo chmod 755 /opt/create_swap.sh
      sudo mkdir -p /etc/systemd/system/
      sudo cp /tmp/create_swap.service /etc/systemd/system/create_swap.service
      sudo chmod 755 /etc/systemd/system/create_swap.service
      sudo systemctl enable create_swap.service
      EOF
    ]
  }

  provisioner "file" {
    source      = "sccache.service"
    destination = "/tmp/sccache.service"
  }
  provisioner "file" {
    source      = "install_sccache.sh"
    destination = "/tmp/install_sccache.sh"
  }
  provisioner "shell" {
    inline = [<<-EOF
      sudo chmod 755 /tmp/install_sccache.sh
      /tmp/install_sccache.sh
      sudo mkdir -p /etc/systemd/system/
      sudo cp /tmp/sccache.service /etc/systemd/system/sccache.service
      sudo chmod 755 /etc/systemd/system/sccache.service
      sudo systemctl enable sccache.service
      EOF
    ]
  }
}