seeing double and ticks have nothing in common #204
Conversation
As usual with my stuff: take, edit, or leave out as desired, if at all.
LGTM. By cargo-c you mean the gcc_s thing? I've filed an issue about it, and there is now an upstream Rust issue about the same problem: rust-lang/rust#82521
I was thinking of binaries... we could create the packages in Alpine... maybe-ish?
Sorry for the slow response, I'm on vacation. You mean a static-ffmpeg Alpine package somehow?
Have you seen or tried multiarch/qemu-user-static? I was able to compile for the most common archs on x86_64 with no problems. It also seems to work on GitHub runners. What I meant originally was having a pre-compiled cargo-c that would reduce the time spent building. And/or already usable containers that we could just copy from. Just suggestions... multiarch is great: https://www.kernel.org/doc/html/v4.14/admin-guide/binfmt-misc.html
That is interesting! So all the emulated build issues we had seem to have been solved? The vorbis one I think got fixed while fixing arm64 things, libvpx had some hard-float/soft-float problem, and I think there was something more? How slow is it to build? If I remember correctly, rav1e (uses Rust) was the one that took nearly half of the build time when I tried. If we want to use normal GitHub runners we might have to do some tricks to get around the 6h job limit (use the Docker cache and restart the build across multiple jobs?). BTW, do you know what the difference is between multiarch/qemu-user-static and docker buildx with QEMU? I think buildx uses the userland QEMU thing as well.
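For reference, this is roughly how the multiarch/qemu-user-static route works; the registration command is the same one used in the workflow further down, and the `arm64v8/alpine` run is just a sanity check:

```sh
# register qemu binfmt_misc handlers on the x86_64 host; --reset clears any
# old handlers, -p yes registers them with the "fix binary" flag so the qemu
# interpreter is preloaded and foreign-arch containers work without having
# qemu inside the image
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# after that, foreign-arch images run transparently through qemu-user
docker run --rm arm64v8/alpine uname -m   # prints: aarch64
```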
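For comparison, a minimal sketch of the buildx route (assuming the binfmt handlers are already registered, e.g. via docker/setup-qemu-action); the builder name and image tag are just examples:

```sh
# buildx uses the same userland qemu + binfmt_misc mechanism; the main
# difference is that BuildKit handles the cross-platform plumbing for you
docker buildx create --name crossbuilder --use
docker buildx build --platform linux/arm64 -t test:arm64 --load .
```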
Hi again, I did an attempt with:

```yaml
name: test
on: workflow_dispatch
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-qemu-action@v2
      - run: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
      - run: docker build -t test .
```

But it fails to build aom :(
I think I got and fixed some similar issues last time I tried emulated builds. If I remember correctly, at least one reason for this is that some build systems, gcc etc. poke around in /proc/cpuinfo to probe things about the target, but the problem is that with emulated builds /proc is still the procfs of the host kernel.
How did you build? On an x86_64 Linux host? Could you try arm32 if you didn't?
I ran the register command locally, then used a similar FROM line in the Dockerfile, only with arm64v8 instead. NEON not available for arm32, maybe? No idea. Oh, I also set ARCH in the Dockerfile as ARG ARCH.
Built on (output of `cat /proc/cpuinfo | grep model`):
40 CPUs, so the output is shortened, hahha. Running Linux with a recent kernel (can't show the uname output, too much sensitive info).
Now I feel I should get a machine with more CPUs :)
No problem, I mostly wanted to show that uname says arm but /proc/cpuinfo says x86. If I remember correctly, I ran into issues where a build system looked at uname to find the target CPU, but then gcc looked at cpuinfo for arm-specific things and found x86 things instead (no NEON feature etc). So for some package builds we might need to force the target in various ways. I dug up an old bug report I filed about it here: docker/for-mac#5245 (the title says it's Mac-specific but it's not), if you're interested.
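A quick way to see the mismatch being described, assuming the qemu handlers are registered as above:

```sh
# uname is emulated, so it reports the target architecture...
docker run --rm arm64v8/alpine uname -m            # -> aarch64
# ...but /proc is the host kernel's procfs, so cpuinfo still describes the
# x86_64 host CPUs (and lists no arm features such as neon)
docker run --rm arm64v8/alpine cat /proc/cpuinfo
```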
Just so I don't forget: if we do want to use emulated builds with Docker like this, I think we should make sure that host details leaking into the container don't end up affecting the build, e.g. CPU features getting disabled etc. BTW I've started a multiarch arm64 build as a GitHub action; it seems to have gotten past aom at least. 1h40m in, it just started building kvazaar.
The arm64 build failed after ~2h when starting to build rav1e, exit code 137, which means it got signal 9 (137−128), SIGKILL... I guess it was OOM-killed. So I'm a bit skeptical that we can build the current static-ffmpeg with all its current dependencies on standard GitHub Actions runners. So then we either strip out some dependencies that use a lot of resources, or somehow run our own hosts to build on, or something else? None of the options seem very attractive to me. I guess we could try emulated multiarch builds for non-amd64/arm64 using the current AWS spot instance build setup, maybe. Not sure how much time and motivation I have to do it right now, I have some other things I want to finish. But if someone else would like to give it a shot, let me know.
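One possible way to guard against that, sketched with placeholder flags (each package's build system has its own knobs, so this is only illustrative):

```sh
# pin the target explicitly instead of letting configure/gcc probe the
# host's /proc/cpuinfo; the triple and -march value are arm64 examples
export CFLAGS="-march=armv8-a" CXXFLAGS="-march=armv8-a"
./configure --host=aarch64-alpine-linux-musl
```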
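The exit-code arithmetic can be checked from a shell; codes above 128 mean "terminated by signal (code − 128)":

```sh
# kill -l maps a signal number back to its name
kill -l $((137 - 128))   # prints: KILL
```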
@pldin601 do you think it would be possible to do an emulated armv7 build etc. on an AWS spot instance? Enough CPU time?
I can only guess that it is possible by running the build in a 32-bit Linux kernel on an ARM CPU. But I don't see any 32-bit Linux machine in the AWS EC2 AMI store.
Sorry, I meant to run an amd64 Linux host but then run a multiarch Docker image or docker buildx on that host, emulating armv7 etc. So the same as above, but with no time restrictions and more memory.
What about... separate build containers/images from which we get the relevant pieces to build whatever we need, static ffmpeg in this case? It would/could probably make things a lot easier and drive the community into action by having multiple angles of attack. Maintaining multiple containers/Dockerfiles would make the whole multiarch build story much easier. Selecting the appropriate building materials for each context would for sure be a task at first, thinking through and coding the piece selector (if I can call it that) needed to build the project, but Docker and Alpine make this task easier, I believe. Also... I was tinkering with Alpine latest and its package builder (abuild) and noticed that many of the libraries we are building already have static APK packages in the repositories... from memory, I think there are more, I just can't remember now. I already built successfully with the native packages; I will do a PR with libcaca enabled soon-ish.
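A hedged sketch of what using prebuilt Alpine static packages could look like; the package names below are examples and availability differs between Alpine versions:

```sh
# pull static libs from the Alpine repos instead of building them from source
apk add --no-cache build-base pkgconf \
    zlib-dev zlib-static \
    openssl-dev openssl-libs-static

# then let ffmpeg's configure pick them up via pkg-config as usual
pkg-config --static --libs zlib openssl
```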
Yeah, something more modular/configurable would be nice. I've done some thinking and prototyping around it but it always ended up feeling messy. So any suggestions on how to do it are welcome, maybe with pros/cons for each alternative? Thinking shell scripts? Makefile? Use the Alpine package infra somehow? Will write more later about some ideas I've had. I'm away visiting my parents so I will be slow at answering.
Maybe we should start a new issue about it?
Dumped some ideas for more modular builds here: #216. About using Alpine packages or not: I don't really have any clear rules for when I've done it or not, but it has usually boiled down to:
I did look into whether abuild could be used for modular builds, but I didn't find anything for handling conditional options etc.
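For context, a minimal APKBUILD skeleton (all names and URLs are placeholders); the format is plain shell, and as noted there is no built-in mechanism for conditional build options:

```sh
pkgname=example
pkgver=1.0.0
pkgrel=0
pkgdesc="example static library build"
url="https://example.org"
arch="all"
license="MIT"
options="!check"          # no test suite in this sketch
makedepends="build-base"
source="$pkgname-$pkgver.tar.gz::https://example.org/$pkgname-$pkgver.tar.gz"

build() {
	./configure --prefix=/usr --enable-static --disable-shared
	make
}

package() {
	make DESTDIR="$pkgdir" install
}
```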
As usual with my stuff: take, edit, or leave out as desired, if at all.
I just guessed it was a typo for "ticks", as my brain can't seem to see anything other than that given the content.
cargo-c... also... APKBUILD? Static for all arches? It wouldn't kill us to at least try to solve this headache by contributing to abuild/aports. Just a thought, no pressure.