```bash
git clone https://github.com/kwozyman/edges.git
git clone --branch fdo-collection https://github.com/empovit/fido-device-onboard-demo.git
ansible-playbook -i inventory.yml fido-device-onboard-demo/fdo-servers.yml -e @vars.yml
ansible-playbook -i inventory.yml setup-builder.yml -e @vars.yml
ansible-playbook -i inventory.yml setup-bootinfra.yml -e @vars.yml
```
In order to deploy Edge Seed, you'll need to edit the global variables. In the examples above, these are contained in the `vars.yml` file:
```yaml
---
bootinfra_ip: 192.168.122.1
manufacturing_server_rendezvous_info_ip_address: "{{ bootinfra_ip }}"
owner_onboarding_server_owner_addresses_ip_address: "{{ bootinfra_ip }}"
master_ssh_key: "<< sshkey >>"
```
These values are propagated to all hosts.
You also need to edit your inventory and potentially the Ansible variables. An example is provided in `inventory-example.yml`.
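As a minimal sketch, an inventory defining the three hosts used throughout this document might look like the following (the addresses are placeholders, and host-specific variables described below are omitted):

```yaml
---
all:
  hosts:
    hypervisor:
      ansible_host: 192.168.122.1
    image-builder:
      ansible_host: 192.168.122.10
    bootinfra:
      ansible_host: 192.168.122.20
```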
To set up the virtual machines, a host named `hypervisor` is required, with the following configuration:

- `libvirt_setup_enable` - install and configure libvirt on the hypervisor
- `libvirt_pool_path` - path on the hypervisor to store virtual machine images
- `libvirt_networks` - a list of virtual networks to be created:
  - `name` - network name
  - `cidr` - subnet
  - `forward_mode` - set to the type of libvirt forward mode wanted. Usually this means `nat` for network address translation (preferable in all-in-one or other types of demos) or `bridge` for bridging to an existing physical device on the hypervisor
  - `bridge_name` - only used to name the interface to bridge to in case of `forward_mode=bridge` (see the sketch after this list)
  - `ip` - set in order to also set up an ip on the hypervisor (required for `forward_mode=nat`)
- `libvirt_isos` - a list of iso images to be made available to the hypervisor:
  - `name` - iso name
  - `url` - url for download
- `libvirt_vms` - what virtual machines to create:
  - `name` - vm name
  - `iso` - what iso image to use -- see `libvirt_isos`
  - `memory` - memory size in MB
  - `cpu` - number of CPUs
  - `disk` - disk size in GB
  - `networks` - what network interfaces to configure:
    - `name` - libvirt network name -- see `libvirt_networks`
    - `ip` - network ip
  - `passwd` - root password
  - `sshkey` - root ssh key
See the playbooks for more details.
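The full example further below only uses `nat` and an isolated network. For the bridged variant, a minimal sketch (the network name and subnet are illustrative, and `bridge_name` assumes a pre-existing bridge interface named `eci` on the hypervisor):

```yaml
libvirt_networks:
  - name: eci-bridged
    cidr: 192.168.5.0/24
    forward_mode: bridge
    bridge_name: eci
```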
To set up the builder machine, a host named `image-builder` is required, with the following possible configuration:

- `admin_password`: unencrypted password to add to generated images (optional)
- `sshkey`: public ssh key to be added to generated images
- `builder_blueprints`:
  - `name`: blueprint/image name
  - `description`: text with image description
  - `sshkey`: custom ssh key per blueprint
  - `fdo`: true if this is an FDO image
  - `installation_device`: block device to deploy image to
  - `manufacturing_server`: for FDO images, the manufacturing server ip
  - `manufacturing_port`: for non-standard FDO manufacturing server port
  - `image_type`: usually `edge-simplified-installer` for FDO
  - `ref`: ostree reference. For RHEL, `rhel/9/x86_64/edge`
  - `packages`: list with additional packages to add to image
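A minimal builder host sketch following the schema above (the blueprint name, installation device, manufacturing server address, and package list are illustrative, not prescribed by the playbooks):

```yaml
image-builder:
  ansible_host: image-builder
  ansible_user: root
  sshkey: <<sshkey>>
  builder_blueprints:
    - name: edge-device
      description: RHEL for Edge image onboarded via FDO
      fdo: true
      installation_device: /dev/vda
      manufacturing_server: "{{ bootinfra_ip }}"
      image_type: edge-simplified-installer
      ref: rhel/9/x86_64/edge
      packages:
        - vim-enhanced
```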
To set up boot infrastructure, a host named `bootinfra` is required, with the following configuration:

- `eci_configdir` - configuration files path, defaults to `/etc/eci`
- `bootinfra_ip` - on what ip to listen for DHCP
- `tftpboot_enabled` - enable tftpboot, defaults to `true`
- `tftpboot_download_enable` - download iso images, defaults to `true`
- `tftpboot_iso` - a list of bootable iso images:
  - `name` - iso name
  - `url` - url for download
  - `files` - a list of files to extract from the iso
  - `shim` - bootable shim file name
  - `kernel` - kernel file to boot
  - `kargs` - kernel arguments
  - `initrd` - a list of initrd files
  - `rootfs` - root filesystem file
  - `rootfs_dir` - relative directory to hard link rootfs to -- this is a workaround required for RHEL images
- `dhcp_enabled` - enable DHCP, defaults to `true`
- `dhcp_hosts` - list of DHCP hosts:
  - `name` - host name
  - `mac` - MAC address
  - `ip` - host ip address
  - `boot` - what iso image to boot -- see `tftpboot_iso`
- `dhcp_range` - DHCP ip range in dnsmasq format
- `dns_enabled` - enable DNS, defaults to `true`
- `dns_static` - list of static DNS entries:
  - `name` - hostname
  - `ip` - ip
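Since `dhcp_range` is in dnsmasq format, a dynamic pool in dnsmasq's `start,end[,lease time]` notation should also be possible, besides the `static` form used in the example below. A sketch with an illustrative range:

```yaml
dhcp_range: "192.168.6.100,192.168.6.200,12h"
```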
The following configuration will create a virtual machine `bootinfra` running RHEL 9.1 with two network interfaces, one `192.168.101.2` in NAT libvirt network `eci-mgmt` and one `192.168.6.1` in isolated libvirt network `eci`. The OS is automatically deployed with the specified ssh key.
```yaml
hypervisor:
  ansible_host: hypervisor
  bridges:
    - name: eci
      ip: 192.168.5.1/24
      interface: enp7s0
  libvirt_setup_enable: true
  libvirt_pool_path: /var/eci/vm
  libvirt_networks:
    - name: eci-mgmt
      cidr: 192.168.101.0/24
      forward_mode: nat
      ip: 192.168.101.1
    - name: eci
      cidr: 192.168.6.0/24
  libvirt_isos:
    - name: rhel91
      #url: https://developers.redhat.com/content-gateway/file/rhel/9.1/rhel-baseos-9.1-x86_64-dvd.iso
      url: https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.1-x86_64-dvd.iso
  libvirt_vms:
    - name: bootinfra
      iso: rhel91
      memory: 2048
      cpu: 2
      disk: 50
      networks:
        - name: eci-mgmt
          ip: 192.168.101.2
        - name: eci
          ip: 192.168.6.1
      sshkey: <<sshkey>>
```
After the previous virtual machine completes setup and first boot, we can configure it to automatically boot other hosts. The following configuration will set up boot infrastructure -- dhcp, dns, tftp, http -- on the interface with ip `192.168.6.1` -- note this is the ip configured for network `eci` in the previous step. In the iso section, we describe what files to extract and what kernel to boot. The root filesystem is made available via a web server and we can instruct the kernel to use it via the `inst.stage2` argument. Finally, we add two hosts, `dhcp-test` and `testinfra`, with their respective configuration.
```yaml
bootinfra:
  ansible_host: bootinfra
  ansible_user: root
  eci_configdir: /etc/eci
  bootinfra_ip: 192.168.6.1
  tftpboot_enabled: true
  tftpboot_download_enable: true
  tftpboot_iso:
    - name: rhel-install
      url: https://download.rockylinux.org/pub/rocky/9/isos/x86_64/Rocky-9.1-x86_64-dvd.iso
      files:
        - images/pxeboot/vmlinuz
        - images/pxeboot/initrd.img
        - images/install.img
        - EFI/BOOT/BOOTX64.EFI
        - EFI/BOOT/grubx64.efi
      shim: BOOTX64.EFI
      kernel: vmlinuz
      kargs: "inst.stage2=http://{{ bootinfra_ip }}/rhel-install rd.live.check"
      initrd:
        - initrd.img
      rootfs: install.img
      rootfs_dir: images/
  dhcp_enabled: true
  dhcp_hosts:
    - name: dhcp-test
      mac: 52:54:00:29:24:96
      ip: 192.168.6.50
      boot: rhel-install
    - name: testinfra
      mac: 52:54:00:70:30:b0
      ip: 192.168.6.51
      boot: rhel-install
  dhcp_range: "{{ bootinfra_ip }},static"
  dns_enabled: true
  dns_static:
    - name: eci.rh-intel.com
      ip: 192.168.1.18
    - name: apps.eci.rh-intel.com
      ip: 192.168.1.18
```
After the configuration is complete, if we PXE boot two UEFI hosts with the correctly configured MAC addresses on the same network, they should boot into the RHEL installer.
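To try this on the hypervisor, such a host could be PXE booted with `virt-install` -- a sketch only, reusing the `dhcp-test` MAC address and the `eci` libvirt network from the examples above (the VM name, sizing, and os-variant are illustrative):

```bash
virt-install \
  --name dhcp-test \
  --memory 2048 \
  --vcpus 2 \
  --disk size=50 \
  --network network=eci,mac=52:54:00:29:24:96 \
  --pxe \
  --boot uefi \
  --os-variant rhel9.1
```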