
Docker images

Valentina Gaggero edited this page Mar 20, 2024 · 6 revisions

Available images

All available images are listed in the following table. They are built starting from Ubuntu 22.04.

| Image | Content | Docker file |
| --- | --- | --- |
| Superbuild | Contains the superbuild compiled with the default profile | Dockerfile |
| Superbuild-icub-head | Contains the superbuild compiled with the icub-head profile | Dockerfile |
| Superbuild-icub-head-withuser | Same as Superbuild-icub-head, but with the icub-user user defined, to ease copying files to/from the host and avoid file permission issues | Dockerfile |
| Superbuild-icubtest | Contains the superbuild compiled with the robot-testing option enabled | Dockerfile |
| Superbuild-gazebo | Contains Gazebo and the superbuild compiled with the Gazebo option enabled | Dockerfile |
| Superbuild - ros2 | Contains the superbuild compiled with the default profile plus ros2 | Dockerfile |

Naming convention for docker images

We store our Docker images on the GitHub Container Registry.

The same images are also available on Docker Hub, under the repository called icubteamcode, because they are used by the RobotBazaar site.
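For example, pulling one of these images amounts to composing a repository path and a tag following the naming convention described below. The registry paths used here (icubteamcode on Docker Hub, ghcr.io/robotology on the GitHub registry) are illustrative assumptions and should be checked against the actual registries:

```shell
# Hypothetical registry paths -- verify them before pulling for real.
image="superbuild"
tag="master-stable_sources"

dockerhub_ref="icubteamcode/${image}:${tag}"
ghcr_ref="ghcr.io/robotology/${image}:${tag}"

# Print the pull commands instead of running them.
echo "docker pull ${dockerhub_ref}"
echo "docker pull ${ghcr_ref}"
```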

We currently have two types of docker images:

  • based on robotology-superbuild;
  • custom images: we compile the single repositories instead of relying on robotology-superbuild.

Images based on robotology-superbuild

The naming convention we adopted for these images follows this structure:

| robotology-superbuild version | robotology-superbuild project tag | image type | final image name |
| --- | --- | --- | --- |
| master | stable | sources | master-stable_sources |
| master | unstable | sources | master-unstable_sources |
| v2022.02.0 | custom | sources | v2022.02.0_sources |
| master | stable | binaries | master-stable_binaries |
| master | unstable | binaries | master-unstable_binaries |
| v2022.02.0 | custom | binaries | v2022.02.0_binaries |

where:

  • robotology-superbuild version can be either master or a released version;
  • robotology-superbuild project tag can be:
    • stable, if the stable branches of the robotology projects are used;
    • unstable, if the development branches of the robotology projects are used;
    • custom, if custom yml files specifying the tags of the robotology projects are used. For released versions of the superbuild we always use custom, since releases come with their own yml files, which can be found here. To avoid confusing users, we do not show the custom tag in the image name.
  • image type can be:
    • sources: the image used for development, where we keep the project repositories;
    • binaries: the image used for production; it is built starting from sources, but only the required libraries, binaries, etc. are copied from it, so it is lighter.

Side note: the image names use lower case for the project tags (e.g. stable instead of Stable) because Docker does not allow upper-case letters in repository names.
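The naming rule above can be sketched as a small shell helper. This is only an illustration of the convention, not part of the actual build pipeline: it joins version, project tag, and image type, and drops the custom tag for released versions.

```shell
# Compose the final image tag from its three components.
# The "custom" project tag is omitted from the name, as described above.
compose_tag() {
  version="$1"   # e.g. master or v2022.02.0
  project="$2"   # stable | unstable | custom
  type="$3"      # sources | binaries
  if [ "$project" = "custom" ]; then
    echo "${version}_${type}"
  else
    echo "${version}-${project}_${type}"
  fi
}
```

For example, `compose_tag master stable sources` yields `master-stable_sources`, matching the table above.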

Custom images -- DEPRECATED ⚠️

Custom images are images that do not rely on robotology-superbuild; for them, we compile the individual projects by hand instead.

The naming convention we adopted for these images follows this structure:

| code version | image type | final image name |
| --- | --- | --- |
| master | sources | master_sources |
| v2022.02.0 | sources | v2022.02.0_sources |
| master | binaries | master_binaries |
| v2022.02.0 | binaries | v2022.02.0_binaries |

where:

  • code version is master or a release version of code;
  • image type is the same as described above.

At the moment of writing, we have the following custom images:

  • gazebo
  • gui-img
  • grasp-gazebo
  • grasp-the-ball-gazebo

Automatic build pipeline

The build of our Docker images is handled by the onCodeChanges GitHub Actions (GA) workflow and is triggered under the following conditions:

  • a change in the dockerfile_images folder triggers the build of the changed image and its children, as defined in conf_build.ini, with the master version;

  • the events reported in the following table are sent by external repositories or by code itself:

    | from | event | code |
    | --- | --- | --- |
    | robotology-superbuild | release_trigger => | repository_trigger |
    | code | repository_trigger => | |
    | robotology | robotology_trigger => | |
    | code | cron_trigger => | |
    • repository_trigger: when there is a superbuild release, robotology-superbuild sends a release_trigger event to code, with a client payload that includes the release version. Upon this trigger, code sends a repository_trigger to itself, with a client payload that includes the list of images to be built. This list contains all the root images (those that are not children of any other image) in the basic folder. NOTE: this trigger builds only the release images.

    • robotology_trigger: this event is handled at the robotology organization level. To avoid duplicating the same workflow across different repositories, we installed the organization workflow app in the robotology repositories related to our images, namely speech, human-sensing, blender-robotics-utils and calibration-supervisor. We implemented a GA workflow that is triggered when there is a change in one of these repositories. Additionally, to fire the trigger only for changes in specific directories rather than anywhere in the repository, we created this conf.ini file to map each repository and directory to the image to be built. NOTE: this trigger builds only the master images.

    • cron_trigger: this trigger is sent by the cron_build action, which runs automatically once per week and builds all the images in code with the master version. This ensures that the master images stay reasonably up to date with the changes in the repositories involved. The payload of this trigger includes the list of all the root images (those that are not children of any other image). NOTE: this trigger builds only the master images.
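These events travel as GitHub repository_dispatch calls. The snippet below is a minimal hand-rolled sketch of sending one; OWNER/REPO, the token, and the client_payload fields are placeholders, since the exact payload format used by the real workflows is not documented on this page.

```shell
# Build (and optionally send) a repository_dispatch payload.
# OWNER/REPO and the payload structure are hypothetical examples.
send_trigger() {
  event="$1"     # e.g. cron_trigger or repository_trigger
  images="$2"    # e.g. a comma-separated list of root images
  payload=$(printf '{"event_type":"%s","client_payload":{"images":"%s"}}' \
            "$event" "$images")
  echo "$payload"
  # Uncomment to actually send (requires a token with repo scope):
  # curl -X POST \
  #      -H "Authorization: Bearer $GITHUB_TOKEN" \
  #      -H "Accept: application/vnd.github+json" \
  #      https://api.github.com/repos/OWNER/REPO/dispatches \
  #      -d "$payload"
}
```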

The onCodeChanges action consists of 4 jobs:

  • a first job checks which images must be built, together with their children, and defines the strategy for the subsequent jobs (matrix, versions, etc.);
  • two jobs build the images that depend on robotology-superbuild and the custom images; based on the strategy defined above, these jobs can spawn multiple instances that run in parallel, as long as runners are available;
  • a final job, run after all the previous jobs have finished, checks the children and sends a repository dispatch back into code, triggering the build of all the child images. The event sent depends on the original trigger: if the action was started by a repository_trigger, the same trigger is used for the children; if it was started by any other trigger (e.g. push, cron_trigger), a cron_trigger is sent instead. This ensures the same versions are built: release children after release parents, and likewise for master.
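The "check the children" step can be pictured as a lookup in a parent-to-children mapping. The snippet below is only an illustration: the `parent: child1 child2` line format is an assumption for the example, not the real conf_build.ini syntax.

```shell
# List the child images of a given parent from an ini-style mapping file.
# Assumed line format (hypothetical): parent: child1 child2 ...
children_of() {
  parent="$1"
  conf="$2"
  awk -F': *' -v p="$parent" '$1 == p { print $2 }' "$conf"
}
```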

An example can be seen in the following image:

[onCodeChanges workflow diagram]

The list of workflows can be seen in the repo under the Actions tab; to show only the runs launched by onCodeChanges, click onCodeChanges in the left-hand list.

How to launch the docker images

Follow these instructions on robot-bazaar to launch our images, share YARP configurations, launch GUIs, etc.

To run the speech images, you need to share the sound device, the PulseAudio server, and the JSON key authorizing access to the Google services, as follows:

```shell
docker run --rm -it --network host --device /dev/snd \
  -e PULSE_SERVER=unix:${XDG_RUNTIME_DIR}/pulse/native \
  -v ${XDG_RUNTIME_DIR}/pulse/native:${XDG_RUNTIME_DIR}/pulse/native \
  -v ~/.config/pulse/cookie:/root/.config/pulse/cookie \
  --group-add $(getent group audio | cut -d: -f3) \
  --privileged \
  --env FILE_INPUT=team-code.json \
  --mount type=bind,source=${HOME}/teamcode/key,target=/root/authorization \
  --mount type=bind,source=${HOME}/.config/yarp,target=/root/.config/yarp \
  icubteamcode/speech:master-stable_sources bash
```
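Since this command shares several host resources, a quick sanity check that they exist can save a failed launch. The helper below is a sketch: in particular, the `${HOME}/teamcode/key/team-code.json` path is an assumption derived from the mounts and the FILE_INPUT variable above.

```shell
# Report any host path needed by the speech container that is missing.
# Returns non-zero if at least one path does not exist.
check_speech_prereqs() {
  missing=0
  for path in \
      "${XDG_RUNTIME_DIR}/pulse/native" \
      "${HOME}/.config/pulse/cookie" \
      "${HOME}/teamcode/key/team-code.json" \
      "${HOME}/.config/yarp"; do
    if [ ! -e "$path" ]; then
      echo "missing: $path"
      missing=1
    fi
  done
  return $missing
}
```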