Size optimize the VUnit images #32
Conversation
Additionally, 115 MB could be saved in mcode images by not installing gcc.
All these tests seem to be testing VPI. However, saving 115 MB from the mcode image is very tempting. That would make it very minimal, and perfect for CI. It is not obvious to me that gcc should be included in the mcode docker image for the sake of being able to run VPI examples. I could add a mechanism to the GHDL testsuite so that these tests do not run unless gcc is available on the system. In this scenario, if someone wants to use VPI, they would have to install a compiler of their choice on top of this image, or use
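As an illustration of that last option, a hypothetical sketch of adding a compiler on top of a slimmed-down mcode image (the image tag and package names are assumptions, not taken from this repository):

```dockerfile
# Hypothetical: a user who needs VPI/co-simulation installs a compiler
# on top of the minimal mcode image themselves.
FROM ghdl/vunit:mcode

RUN apt-get update \
 && apt-get install -y --no-install-recommends gcc libc6-dev \
 && rm -rf /var/lib/apt/lists/*
```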
Overall, I'm good with removing caches, avoiding `COPY` in the final images, and checking that installations are correct. I'm not sure about removing `curl`, `make` and `gcc`, for now. But let's discuss it further below.
I believe this is the elephant in the room. As you guessed, GCC is added to mcode images for co-simulation purposes. When the LLVM or GCC backends are used, GHDL can build C sources "internally": you provide VHDL sources and C sources, and GHDL does the magic. With mcode, that's not possible, because it can only interact with pre-built shared libraries. Hence, a C compiler is required for users to convert their co-simulation C/C++ sources into a shared library. Providing it in the runtime image is convenient because it ensures that any image can be used for co-simulation with foreign languages. Having said that, you are correct: we should not force all users to download GCC unless they need/want to.
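For illustration, a minimal sketch of the pre-build step that needs an external compiler when the mcode backend is used (the image tag, file names, and paths are assumptions, not taken from this PR):

```dockerfile
# Hypothetical: with mcode, co-simulation C sources must be turned into a
# shared library before GHDL can load them, hence the bundled C compiler.
FROM ghdl/vunit:mcode

# cosim.c would hold the foreign C functions (e.g. exposed via VHPIDIRECT or VPI).
COPY cosim.c /work/cosim.c

# This is the step that requires gcc in the image (or installed on top of it);
# with the LLVM/GCC backends, GHDL can compile and link such sources itself.
RUN gcc -shared -fPIC /work/cosim.c -o /work/cosim.so
```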
Please, go ahead. See examples of tests skipped for other reasons:
Related to previous comments about the purpose of these images, a few months ago some ghdl/cosim:* images were created. I think it makes sense now to make
I will get to it in the upcoming days. Maybe tomorrow, maybe next week. Thanks for your help!
@LukasVik, BTW, you might like wagoodman/dive.
Yeah, that tool is great. I used it a lot in this investigation.
* Delete pip cache: saves 15 MB.
* Do not upgrade pip: saves 8 MB. Pip is already installed in /usr/lib/python3/dist-packages, currently version 18.1. The pip install command will not upgrade that package, but instead install a new one in /usr/local/lib, currently version 20.2.4. We hardly need a bleeding-edge pip version, and it adds size to the image.
* Do not install ca-certificates: does not impact size in a noteworthy way, but it does not seem to be necessary for the image either. In order to keep the Dockerfile clean and ship only what is necessary, it is removed.
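A minimal sketch of what the resulting install step might look like under these changes (the package name and exact layout are assumptions, not the Dockerfile from this PR):

```dockerfile
# Hypothetical: use the distribution's pip as-is (no self-upgrade) and skip the
# download cache; ca-certificates is likewise left out.
RUN pip3 install --no-cache-dir vunit_hdl
```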
I have cleaned up the commits and fixed all your suggestions. I think this PR is ready to merge.
I have created #34 for follow-up work so that this PR can be closed.
Motivation: These Docker images are used in CI in tsfpga and other projects. At the moment, in tsfpga, pulling the Docker image represents a significant portion of the CI time. In order to save execution time, bandwidth and power, keeping Docker images small should always be a priority.
In summary: -27 MB for the `stable` image and -75 MB for the `master` image. See the individual commits for details and some discussion around the individual changes.