fatal error: 'ggml.h' file not found #533

Open
samuelint opened this issue Oct 11, 2024 · 7 comments

@samuelint

Building from the project root or from an example directory:

> cargo build # Fail
> cd examples/simple
> cargo build # Fail

leads to the following error:

cargo build
   Compiling llama-cpp-sys-2 v0.1.83 (/llama-cpp-rs/llama-cpp-sys-2)
error: failed to run custom build command for `llama-cpp-sys-2 v0.1.83 (/llama-cpp-rs/llama-cpp-sys-2)`

Caused by:
  process didn't exit successfully: `/llama-cpp-rs/target/debug/build/llama-cpp-sys-2-905f5f3cc6aa6cb5/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-env-changed=TARGET
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS_aarch64-apple-darwin
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS_aarch64_apple_darwin
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS
  cargo:rerun-if-changed=wrapper.h

  --- stderr
  ./llama.cpp/include/llama.h:4:10: fatal error: 'ggml.h' file not found
  thread 'main' panicked at llama-cpp-sys-2/build.rs:197:10:
  Failed to generate bindings: ClangDiagnostic("./llama.cpp/include/llama.h:4:10: fatal error: 'ggml.h' file not found\n")
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

However, building llama-cpp-2 and llama-cpp-sys-2 individually succeeds:

> cd llama-cpp-2
> cargo build # Pass
> cd llama-cpp-sys-2
> cargo build # Pass

What needs to be done to build the project on Mac?

I've tried explicitly passing --features metal, but it doesn't fix the problem.

I've followed the steps described in the Hacking section of the README: https://github.com/utilityai/llama-cpp-rs/tree/main?tab=readme-ov-file#hacking
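
One quick diagnostic (a sketch; the thread doesn't confirm this is the cause, and the submodule path is inferred from the build output above) is to check whether the llama.cpp submodule is actually populated:

find llama-cpp-sys-2/llama.cpp -name ggml.h

If the command prints nothing, the submodule checkout is empty and the build script has no headers to give bindgen, which would produce exactly this ClangDiagnostic failure.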

@brittlewis12
Contributor

@samuelint I invoke the simple binary from the root of the repo like this:

cargo run --release --bin simple --features metal -- --n-len=2048 --prompt "<|start_header_id|>user<|end_header_id|>\n\nshare 5 reasons rust is better than c++<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" local ~/models/llama-3.2-3b-instruct.Q6_K.gguf

@samuelint
Author

samuelint commented Oct 12, 2024

That command works. It seems the problem is the debug build; it only works in release.

How can I make it work in debug? I also get the error with VS Code's rust-analyzer, which builds in debug by default. This prevents the IDE from highlighting errors.
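
If the debug failure turns out to be a missing feature flag rather than the profile itself, one possible workaround on the rust-analyzer side (a sketch using rust-analyzer's standard cargo.features setting; not verified in this thread) is to have the IDE enable the metal feature via .vscode/settings.json:

{
    "rust-analyzer.cargo.features": ["metal"]
}

This only helps if the issue is feature-related; it won't repair a broken submodule checkout.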

@brittlewis12
Contributor

Hmm, interesting. I have no problem removing the release flag and performing a build rather than a run:

cargo b --bin simple --features metal

@samuelint
Author

@brittlewis12 which commit is your llama.cpp submodule on?

@brittlewis12
Contributor

It appears to be pinned to 8f1d81a0.
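
For anyone comparing their own checkout, the pinned commit can be read with plain git from the repo root (a usage sketch, not a step taken in this thread):

git submodule status

The hash printed next to the llama.cpp entry is the commit recorded by the superproject; a leading - means the submodule was never initialized, and a leading + means a different commit is currently checked out.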

@arrowban

@brittlewis12 I had a similar issue when running the simple example on macOS 15.0.1:

cargo run --release --bin simple -- --prompt "The way to kill a linux process is" hf-model TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf
   Compiling llama-cpp-sys-2 v0.1.83 (/Users/arrowban/tauri-app/llama-cpp-rs/llama-cpp-sys-2)
   Compiling icu_normalizer v1.5.0
   Compiling clap v4.5.19
   Compiling idna v1.0.1
   Compiling url v2.5.1
   Compiling ureq v2.9.7
error: failed to run custom build command for `llama-cpp-sys-2 v0.1.83 (/Users/arrowban/tauri-app/llama-cpp-rs/llama-cpp-sys-2)`

Caused by:
  process didn't exit successfully: `/Users/arrowban/tauri-app/llama-cpp-rs/target/release/build/llama-cpp-sys-2-0c4fc171384fe637/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-env-changed=TARGET
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS_aarch64-apple-darwin
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS_aarch64_apple_darwin
  cargo:rerun-if-env-changed=BINDGEN_EXTRA_CLANG_ARGS
  cargo:rerun-if-changed=wrapper.h

  --- stderr
  ./llama.cpp/include/llama.h:4:10: fatal error: 'ggml.h' file not found
  thread 'main' panicked at llama-cpp-sys-2/build.rs:197:10:
  Failed to generate bindings: ClangDiagnostic("./llama.cpp/include/llama.h:4:10: fatal error: 'ggml.h' file not found\n")
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...

It was fixed by deleting everything with rm -rf llama-cpp-rs and running the instructions again from scratch:

git clone --recursive https://github.com/utilityai/llama-cpp-rs
cd llama-cpp-rs

cargo run --release --bin simple -- --prompt "The way to kill a linux process is" hf-model TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf

I'm not sure why it didn't work the first time, but maybe it's because I originally cloned the repo without the --recursive flag and only initialized the submodules after the first cargo run ... attempt.
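
If that guess is right, a lighter recovery than re-cloning might be to populate the submodules in place and discard the failed build state (a sketch, not verified in this thread):

git submodule update --init --recursive
cargo clean
cargo run --release --bin simple -- --prompt "The way to kill a linux process is" hf-model TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf

cargo clean is included defensively, to rule out any cached output from the earlier failed builds.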

@vargad

vargad commented Oct 25, 2024

@arrowban Same here on Linux.
