Preventing long run time when Python packages are required #2616
-
Hi,
I took a closer look with pip --verbose, and it compiles a lot of stuff to build the wheels. I tried a few things to get around all of this in the MegaLinter step (uses: oxsecurity/megalinter@v6) of mega-linter.yml (the GitHub Actions workflow .yml, not the .mega-linter.yml config). Any other advice? What is the proper way of doing this? Just accept the 1 h run time? Or disable E0401 altogether? I was also wondering why it does not simply use wheel files, but even when I upgrade pip and wheel before installing, it still builds the wheels itself. Considering this also makes the run fail when using the Python flavor of MegaLinter (since it lacks g++), I somehow think not everyone is doing things the way I am. Any ideas? Thanks!
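For clarity, the step I mean is the usual MegaLinter step in the workflow file, roughly like this (a minimal sketch; everything except the action reference is standard boilerplate rather than my exact setup):

```yaml
# In mega-linter.yml (the GitHub Actions workflow), not .mega-linter.yml
- name: MegaLinter
  id: ml
  uses: oxsecurity/megalinter@v6
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```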
Replies: 4 comments
-
When Python packages are compiled during installation, it is because there is no compatible wheel for the package on the platform where it has to be installed. Our images are based on Alpine Linux, and Alpine Linux uses the musl C library. However, many Python packages don't release wheels for musl, even though it is possible (and some do). These super long build times are even more obvious in our work to build linux/arm64 containers on a linux/amd64 runner; there, the build times are painful.
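If you want to check whether a given package publishes a musl-compatible wheel at all, you can ask pip for a binary-only download from inside an Alpine environment; a rough sketch (the package name and step layout are just illustrative):

```yaml
- name: Check for a prebuilt musl wheel (illustrative)
  run: |
    # Succeeds only if a wheel matching this musl-based platform exists;
    # with --only-binary, pip errors out instead of building from source.
    pip download --only-binary=:all: --no-deps numpy -d /tmp/wheel-check
  shell: alpine.sh {0}
```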
-
As an alternative, do you know if you can cache the built wheels with the action's cache, and then install from them in your pre-commands? A wheel file is supposed to be a single compressed archive in which all the build work is already done!
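The install side of that idea could look something like the following in .mega-linter.yml (a sketch only; the PRE_COMMANDS form and the alpine-wheels directory name are assumptions to adjust to your setup):

```yaml
# .mega-linter.yml (sketch): install requirements from a cached wheel directory.
# pip uses a matching local wheel when one exists and falls back to PyPI otherwise.
PRE_COMMANDS:
  - command: pip install --find-links=./alpine-wheels -r requirements.txt
    cwd: workspace
```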
-
Yeah, caching definitely seems like the best path forward to me. Can you share your code and the error you are facing wherever you get stuck on your best attempt at caching the dependencies? See https://github.com/actions/cache/blob/main/examples.md#python---pip for guidance.
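For reference, the approach on that page is to cache pip's cache directory keyed on the hash of requirements.txt, roughly like this (action versions and paths are illustrative; adapt to your runner):

```yaml
- uses: actions/cache@v3
  with:
    path: ~/.cache/pip
    # Invalidate the cache whenever the pinned requirements change.
    key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-
```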
-
Thanks, both of you. Your comments helped me understand the problem better and motivated me to keep going. I got it to work. If anyone can use an example that just builds the wheels for Alpine, see below. It includes all the build dependencies needed to build the wheels for numpy, scipy, and some other libraries.

```yaml
name: Build wheel on Alpine Linux

on:
  push:

jobs:
  build-wheel:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v3

      - name: Cache wheel directory
        id: cache-wheels
        uses: actions/cache@v3
        with:
          path: ${{ github.workspace }}/alpine-wheels
          key: ${{ runner.os }}-alpine-wheels-${{ hashFiles('requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-alpine-wheels-

      - name: Set up Alpine Linux
        uses: jirutka/setup-alpine@v1
        with:
          packages: >
            build-base
            python3-dev
            py3-pip
            gfortran
            openblas-dev

      - name: Upgrade pip and install wheel
        run: |
          python -m pip install --upgrade pip
          python -m pip install wheel
        shell: alpine.sh {0}

      - name: Remove version settings
        if: steps.cache-wheels.outputs.cache-hit != 'true'
        run: |
          sed 's/==.*//' requirements.txt > requirements_no_version.txt
        shell: alpine.sh {0}

      - name: Build wheels
        if: steps.cache-wheels.outputs.cache-hit != 'true'
        run: |
          mkdir -p alpine-wheels
          for package in $(cat requirements_no_version.txt); do
            # Only build locally when no compatible (musl) wheel can be downloaded.
            if ! pip download --only-binary=:all: --no-deps "$package" -d /tmp > /dev/null 2>&1; then
              echo "Building wheel for $package"
              pip wheel --wheel-dir=./alpine-wheels "$package"
            else
              echo "Compatible wheel for $package found, skipping build"
            fi
          done
        shell: alpine.sh {0}

      - name: List built wheels
        run: ls -l ./alpine-wheels
```
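In the linting workflow itself, the remaining piece is restoring the same cache before MegaLinter runs so the pre-commands can install from it. Roughly (an untested sketch; the step names and the alpine-wheels path just mirror the build workflow above, and it assumes the container's Python version matches the one the wheels were built for):

```yaml
- name: Restore cached wheels
  uses: actions/cache@v3
  with:
    path: ${{ github.workspace }}/alpine-wheels
    key: ${{ runner.os }}-alpine-wheels-${{ hashFiles('requirements.txt') }}
    restore-keys: |
      ${{ runner.os }}-alpine-wheels-

- name: MegaLinter
  uses: oxsecurity/megalinter@v6
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```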