
llama_cpp v0.1.1

Released by @scriptis on 08 Nov 12:09

Chore

  • Remove debug binary from Cargo.toml

New Features

  • Add LlamaModel::load_from_file_async

Bug Fixes

  • Require that llama_context is accessed from behind a mutex.
    This fixes a race condition when several get_completions threads are spawned at the same time (the first sketch below shows the pattern).
  • start_completing should not be invoked on a per-iteration basis.
    Some UB that can be triggered by llama.cpp's threading model remains and still needs patching up (the second sketch below shows the corrected call shape).

Commit Statistics

  • 5 commits contributed to the release.
  • 13 days passed between releases.
  • 4 commits were understood as conventional.
  • 0 issues like '(#ID)' were seen in commit messages.

Commit Details

  • Uncategorized
    • Add LlamaModel::load_from_file_async (3bada65)
    • Remove debug binary from Cargo.toml (3eddbab)
    • Require llama_context is accessed from behind a mutex (b676baa)
    • start_completing should not be invoked on a per-iteration basis (4eb0bc9)
    • Update to llama.cpp 0a7c980 (94d7385)