llama_cpp v0.1.1
Released by scriptis · 08 Nov 12:09
Chore
- Remove debug binary from `Cargo.toml`
New Features
- Add `LlamaModel::load_from_file_async`
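A minimal usage sketch of the new async loader. The exact signature is an assumption here (taking a path and returning a `Result`, mirroring a synchronous `load_from_file`), and the model path and tokio runtime are placeholders:

```rust
// Sketch only: `load_from_file_async` is assumed to take a path and return
// a `Result`, mirroring a synchronous loader; the path is a placeholder.
use llama_cpp::LlamaModel;

#[tokio::main]
async fn main() {
    // Loading a multi-gigabyte model file is slow; awaiting it here keeps
    // the async executor free to run other tasks in the meantime.
    let model = LlamaModel::load_from_file_async("models/model.gguf")
        .await
        .expect("failed to load model");

    // `model` can now be used to create contexts and request completions.
    let _ = model;
}
```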
Bug Fixes
- Require that `llama_context` is accessed from behind a mutex. This solves a race condition when several `get_completions` threads are spawned at the same time.
- `start_completing` should not be invoked on a per-iteration basis. There's still some UB that can be triggered due to llama.cpp's threading model, which needs patching up.
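The mutex requirement follows the standard Rust pattern for serializing access to shared FFI state. A self-contained sketch of that pattern, using an illustrative stand-in type rather than the crate's real internals:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Illustrative stand-in for the FFI context; not the crate's actual type.
struct LlamaContextInner;

impl LlamaContextInner {
    fn get_completions(&mut self, prompt: &str) -> String {
        // In the real crate this drives llama.cpp inference; stubbed here.
        format!("completion for {prompt:?}")
    }
}

fn main() {
    // Guarding the context behind a Mutex serializes access, so several
    // completion threads can no longer race on the underlying state.
    let ctx = Arc::new(Mutex::new(LlamaContextInner));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let ctx = Arc::clone(&ctx);
            thread::spawn(move || {
                let mut guard = ctx.lock().unwrap();
                guard.get_completions(&format!("prompt {i}"))
            })
        })
        .collect();

    for h in handles {
        println!("{}", h.join().unwrap());
    }
}
```

With the context behind `Arc<Mutex<...>>`, concurrent completion calls queue up at the lock instead of mutating shared llama.cpp state simultaneously.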
Commit Statistics
- 5 commits contributed to the release.
- 13 days passed between releases.
- 4 commits were understood as conventional.
- 0 issues like '(#ID)' were seen in commit messages.