This repository has been archived by the owner on Jan 31, 2022. It is now read-only.

IPBus compatibility (e.g. the GLIB) #127

Open
lpetre-ulb opened this issue May 31, 2019 · 2 comments

Comments

@lpetre-ulb
Contributor

lpetre-ulb commented May 31, 2019

This issue follows up the discussion started here and continued during the developers meeting.

The main purpose of this issue is to converge on the architecture of the IPBus compatibility layer and to list the required changes in the software stack.

A first port of the current firmware and software stack to the GLIB was presented in January. It emulates the Zynq processor with a Docker container and does not require any change in the current software stack except recompilation for a different architecture.

It has been designed to be as user-friendly as possible: the emulator can be built and launched with a single command, docker-compose up, which builds and starts two Docker containers:

  1. The IPBus ControlHub container, which can be replaced with a bare ControlHub if one is available.
  2. The Zynq processor emulator itself, which contains all the required libraries: xerces-c, log4cplus, lmdb, and a libmemsvc API-compatible library. Only one emulator container is spawned, but expanding to multiple emulators (and thus multiple GLIBs) is trivial.
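The two-container setup described above could be sketched roughly as follows in a Compose file (the service and image names here are hypothetical placeholders, not the ones used in the actual emulator repository):

```yaml
# docker-compose.yml sketch -- all names are illustrative assumptions
version: "3"
services:
  controlhub:
    # The IPBus ControlHub; could be dropped in favor of a bare ControlHub
    image: glib-emulator/controlhub
  zynq-emulator:
    # The emulated Zynq environment with xerces-c, log4cplus, lmdb and
    # the libmemsvc API-compatible wrapper
    image: glib-emulator/zynq
    depends_on:
      - controlhub
```

Scaling to several emulated GLIBs would then amount to duplicating the emulator service (or using Compose replicas) with distinct names.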

The most important part of the emulation software is the libmemsvc API-compatible library. It wraps memory-mapped accesses into IPBus transactions, applying the required address translation formula. This simple, but crucial, library is available in the aforementioned repository.

Regarding the repositories, I think that the current solution is ideal: a Git repository is dedicated to the GLIB emulation and contains everything needed to build the Docker images. It can be seen as a black box like the CTP7 (except that we control every aspect of it :-)). However, the libmemsvc emulator could live somewhere in the xHAL repository, maybe in the xcompile directory.

My main point of concern is however the build system. Currently, the build system is tightly coupled with the target platform, and I fear that supporting different platforms and toolchains will lead to a fragile solution. My proposal would be to use CMake as much as possible. @lmoureaux and I already had great success building all parts of the GEM software with CMake in a robust way and for different platforms. Also, CMake is how the GLIB emulator software is currently built.
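Concretely, the decoupling CMake offers comes from toolchain files: one source tree, with the target selected at configure time rather than hard-coded into the makefiles. The file below is a generic sketch (the path, triplet, and sysroot handling are assumptions, not taken from the actual GEM build system):

```cmake
# toolchain/zynq.cmake -- hypothetical cross-toolchain file for the Zynq target
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)

# Search the cross sysroot for headers and libraries, but never for programs
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

A native x86_64 build (e.g. for the GLIB emulator container) would then be a plain `cmake ..`, while the Zynq cross build would be `cmake -DCMAKE_TOOLCHAIN_FILE=toolchain/zynq.cmake ..`, with no per-target makefile includes.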

@jsturdy
Contributor

jsturdy commented Jun 2, 2019

Currently, the build system is tightly coupled with the target platform, and I fear that supporting different platforms and toolchains will lead to a fragile solution.

The only supported platforms are arm-linux and x86_64-linux, and currently the build system handles this quite well.
cmsgemos will not change build systems unless xDAQ upstream changes, and for simplicity, the rest of the GEM ecosystem will probably not change either.

@lpetre-ulb
Contributor Author

Currently, the build system is tightly coupled with the target platform, and I fear that supporting different platforms and toolchains will lead to a fragile solution.

The only supported platforms are arm-linux and x86_64-linux, and currently the build system handles this quite well.
cmsgemos will not change build systems unless xDAQ upstream changes, and for simplicity, the rest of the GEM ecosystem will probably not change either.

Yes, arm-linux and x86_64-linux are supported. However, how are they supported for the same targets (in the build system sense)? I mean, how should I build the ctp7_modules for x86_64-linux? The Makefile directly includes mfZynq.mk and refers to .../[arm]/... paths. The same question applies to how to build xhalarm for the GLIB emulator.

Moreover, I would not consider the DAQ machine's x86_64-linux and the GLIB emulator's x86_64-linux to be the same platform. They share an architecture and a kernel, but there will be dissimilarities in several packages (e.g. xDAQ has no reason to be installed in the Docker image), in the way the packages will be compiled (some kind of sysroot for the Docker image), and maybe in the way they will be installed.

Don't get me wrong: if there is an easy and reliable way to use the current build system for multiple targets, I will use it. That is not my current feeling, though.
