Build failures since https://github.com/ggerganov/whisper.cpp/pull/2573 #2584

ericcurtin (Contributor) opened this issue on Nov 22, 2024
Related: containers/ramalama#474

Just tagging @bmahabirbu because I'm on holidays and he's one of the folks who looks at the CUDA builds.

The failures look like this:

```
-- Configuring done (6.5s)
CMake Error in ggml/src/ggml-cuda/CMakeLists.txt:
CUDA_ARCHITECTURES is set to "native", but no GPU was detected.

-- Generating done (0.0s)
CMake Generate step failed. Build files cannot be regenerated correctly.
Error: building at STEP "RUN chmod +x /scripts/*.sh && /scripts/build_llama_and_whisper.sh "cuda" "$LLAMA_CPP_SHA" "$WHISPER_CPP_SHA" "/tmp/install" "-DGGML_CUDA=ON" "-DCMAKE_EXE_LINKER_FLAGS=-Wl,--allow-shlib-undefined"": while running runtime: exit status 1
```
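For the CUDA case, the error says CUDA_ARCHITECTURES defaults to "native", which makes CMake try to detect a GPU at configure time, and the container build has no GPU visible. I haven't verified this, but one workaround might be to pin the architectures explicitly so detection is skipped, along these lines (the architecture list is only an example, adjust it for whatever the image should support):

```sh
# Untested idea: pin the CUDA architectures so CMake does not need to detect a
# GPU at configure time inside the container. The list below is an example only.
cmake -B build \
      -DGGML_CUDA=ON \
      -DCMAKE_CUDA_ARCHITECTURES="75;80;86" \
      -DCMAKE_EXE_LINKER_FLAGS="-Wl,--allow-shlib-undefined"
cmake --build build -j
```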

and this one from the Kompute build:

```
[ 38%] Building C object ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-aarch64.c.o
/whisper.cpp/ggml/src/ggml-kompute/ggml-kompute.cpp:7:10: fatal error: shaderop_scale.h: No such file or directory
7 | #include "shaderop_scale.h"
| ^~~~~~~~~~~~~~~~~~
compilation terminated.
gmake[2]: *** [ggml/src/ggml-kompute/CMakeFiles/ggml-kompute.dir/build.make:76: ggml/src/ggml-kompute/CMakeFiles/ggml-kompute.dir/ggml-kompute.cpp.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:337: ggml/src/ggml-kompute/CMakeFiles/ggml-kompute.dir/all] Error 2
gmake[1]: *** Waiting for unfinished jobs....
[ 41%] Building C object ggml/src/ggml-cpu/CMakeFiles/ggml-cpu.dir/ggml-cpu-quants.c.o
[ 45%] Linking CXX static library libggml-cpu.a
[ 45%] Built target ggml-cpu
gmake: *** [Makefile:136: all] Error 2
Error: building at STEP "RUN chmod +x /scripts/*.sh && /scripts/build_llama_and_whisper.sh "ramalama" "$LLAMA_CPP_SHA" "$WHISPER_CPP_SHA" "/usr" "-DGGML_KOMPUTE=1"": while running runtime: exit status 2
```
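shaderop_scale.h looks like one of the headers the Kompute backend generates from its Vulkan compute shaders at build time, so my guess (unverified) is that the generation step never ran in the image, e.g. because the kompute submodule or the Vulkan tooling isn't present. Something along these lines might be worth trying in the container (package names are Fedora-style examples, not confirmed):

```sh
# Guess, not verified: make sure the kompute sources and Vulkan shader tooling
# are available so the generated headers (e.g. shaderop_scale.h) can be built
# before ggml-kompute.cpp is compiled.
git clone --recurse-submodules https://github.com/ggerganov/whisper.cpp
dnf install -y vulkan-headers vulkan-loader-devel glslc
cmake -B whisper.cpp/build -S whisper.cpp -DGGML_KOMPUTE=1
cmake --build whisper.cpp/build -j
```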
