Release llama_cpp v0.1.3 #8

Merged 1 commit on Nov 8, 2023

Cargo.lock: 2 changes (1 addition, 1 deletion)

Some generated files are not rendered by default.

crates/llama_cpp/CHANGELOG.md: 54 changes (51 additions, 3 deletions)
@@ -5,6 +5,39 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## v0.1.3 (2023-11-08)

### New Features

- <csr-id-1019402eeaa6bff176a228b477486105d16d36ef/> more `async` function variants
- <csr-id-c190df6ebfd02ef5f3e0fd50d82a456ef426e6e6/> add `LlamaSession.model`
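
A minimal sketch of how these two additions might be used together, assuming a tokio-style async runtime and a `create_session` constructor; the changelog names `load_from_file_async` (under v0.1.1 below) and the `LlamaSession.model` accessor, but not their exact signatures, so the details here are illustrative:

```rust
use llama_cpp::LlamaModel;

// Illustrative only: argument lists, return types, and error types are
// assumptions, not confirmed by this changelog.
async fn demo() -> Result<(), Box<dyn std::error::Error>> {
    // The async loading variant keeps an executor thread free while
    // model weights are read from disk.
    let model = LlamaModel::load_from_file_async("path/to/model.gguf").await?;

    // `LlamaSession.model` hands back the model a session was created
    // from, so callers no longer need to keep a separate handle around.
    let session = model.create_session();
    let _backing_model = session.model();

    Ok(())
}
```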

### Other

- <csr-id-0a0d5f3fce1c46f914b5f48802241f200538c4f7/> typo

### Commit Statistics

<csr-read-only-do-not-edit/>

- 5 commits contributed to the release.
- 3 commits were understood as [conventional](https://www.conventionalcommits.org).
- 0 issues like '(#ID)' were seen in commit messages

### Commit Details

<csr-read-only-do-not-edit/>

<details><summary>view details</summary>

* **Uncategorized**
- Typo ([`0a0d5f3`](https://github.com/binedge/llama_cpp-rs/commit/0a0d5f3fce1c46f914b5f48802241f200538c4f7))
- Release llama_cpp v0.1.2 ([`4d0b130`](https://github.com/binedge/llama_cpp-rs/commit/4d0b130be8f250e599908bab042431db8aa2f553))
- More `async` function variants ([`1019402`](https://github.com/binedge/llama_cpp-rs/commit/1019402eeaa6bff176a228b477486105d16d36ef))
- Add `LlamaSession.model` ([`c190df6`](https://github.com/binedge/llama_cpp-rs/commit/c190df6ebfd02ef5f3e0fd50d82a456ef426e6e6))
- Release llama_cpp_sys v0.2.1, llama_cpp v0.1.1 ([`a9e5813`](https://github.com/binedge/llama_cpp-rs/commit/a9e58133cb1c1d4d45f99a7746e0af7da1a099e1))
</details>

## v0.1.2 (2023-11-08)

### New Features
@@ -16,7 +49,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

<csr-read-only-do-not-edit/>

- 2 commits contributed to the release.
- 3 commits contributed to the release.
- 2 commits were understood as [conventional](https://www.conventionalcommits.org).
- 0 issues like '(#ID)' were seen in commit messages

@@ -27,6 +60,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
<details><summary>view details</summary>

* **Uncategorized**
- Release llama_cpp v0.1.2 ([`368a5de`](https://github.com/binedge/llama_cpp-rs/commit/368a5dec4379ccdbe7b68c40535f30e13f23d8c2))
- More `async` function variants ([`dcfccdf`](https://github.com/binedge/llama_cpp-rs/commit/dcfccdf721eb47a364cce5b1c7a54bcf94335ac0))
- Add `LlamaSession.model` ([`56285a1`](https://github.com/binedge/llama_cpp-rs/commit/56285a119633682951f8748e85c6b8988e514232))
</details>
@@ -39,24 +73,33 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

- <csr-id-3eddbab3cc35a59acbe66fa4f5333a9ca0edb326/> Remove debug binary from Cargo.toml

### Chore

- <csr-id-dbdd9a4a2d813d990e5829a09fc5c8df75d9d54b/> Remove debug binary from Cargo.toml

### New Features

- <csr-id-3bada658c9139af1c3dcdb32c60c222efb87a9f6/> add `LlamaModel::load_from_file_async`
- <csr-id-bbf9f69a2dd068a3a20199ffce44d3c8a25b64d5/> add `LlamaModel::load_from_file_async`

### Bug Fixes

- <csr-id-b676baa3c1a6863c7afd7a88b6f7e8ddd2a1b9bd/> require `llama_context` is accessed from behind a mutex
This solves a race condition when several `get_completions` threads are spawned at the same time
- <csr-id-4eb0bc9800877e460fe0d1d25398f35976b4d730/> `start_completing` should not be invoked on a per-iteration basis
There's still some UB that can be triggered due to llama.cpp's threading model, which needs patching up.
- <csr-id-81e5de901a3da88a97ba00c6a36e303d8708380d/> require `llama_context` is accessed from behind a mutex
This solves a race condition when several `get_completions` threads are spawned at the same time
- <csr-id-27706de1a471b317e4b7b4fdd4c5bbabfbd95ed6/> `start_completing` should not be invoked on a per-iteration basis
There's still some UB that can be triggered due to llama.cpp's threading model, which needs patching up.
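
For context on the two `llama_context` fixes above: llama.cpp's context object is not safe to drive from several threads at once, so the crate now keeps it behind a mutex. Below is a minimal sketch of that pattern with a stand-in type; the real crate wraps the FFI handle, and every name here is illustrative:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for the wrapped llama.cpp FFI handle; illustrative only.
struct LlamaContext;

impl LlamaContext {
    // &mut self models the fact that generating completions mutates
    // llama.cpp's internal state.
    fn get_completions(&mut self) -> String {
        String::from("...")
    }
}

fn main() {
    // One mutex-guarded context shared by every worker: acquiring the
    // lock serializes access, so concurrently spawned `get_completions`
    // threads can no longer race on the same context.
    let ctx = Arc::new(Mutex::new(LlamaContext));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let ctx = Arc::clone(&ctx);
            thread::spawn(move || ctx.lock().unwrap().get_completions())
        })
        .collect();

    for handle in handles {
        let _ = handle.join().unwrap();
    }
}
```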

### Commit Statistics

<csr-read-only-do-not-edit/>

- 6 commits contributed to the release.
- 11 commits contributed to the release.
- 13 days passed between releases.
- 4 commits were understood as [conventional](https://www.conventionalcommits.org).
- 8 commits were understood as [conventional](https://www.conventionalcommits.org).
- 0 issues like '(#ID)' were seen in commit messages

### Commit Details
@@ -67,6 +110,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

* **Uncategorized**
- Release llama_cpp_sys v0.2.1, llama_cpp v0.1.1 ([`ef4e3f7`](https://github.com/binedge/llama_cpp-rs/commit/ef4e3f7a3c868a892f26acfae2a5211de4900d1c))
- Add `LlamaModel::load_from_file_async` ([`bbf9f69`](https://github.com/binedge/llama_cpp-rs/commit/bbf9f69a2dd068a3a20199ffce44d3c8a25b64d5))
- Remove debug binary from Cargo.toml ([`dbdd9a4`](https://github.com/binedge/llama_cpp-rs/commit/dbdd9a4a2d813d990e5829a09fc5c8df75d9d54b))
- Require `llama_context` is accessed from behind a mutex ([`81e5de9`](https://github.com/binedge/llama_cpp-rs/commit/81e5de901a3da88a97ba00c6a36e303d8708380d))
- `start_completing` should not be invoked on a per-iteration basis ([`27706de`](https://github.com/binedge/llama_cpp-rs/commit/27706de1a471b317e4b7b4fdd4c5bbabfbd95ed6))
- Update to llama.cpp 0a7c980 ([`eb8f627`](https://github.com/binedge/llama_cpp-rs/commit/eb8f62777aa63787004771d86d34a8862b3a4157))
- Add `LlamaModel::load_from_file_async` ([`3bada65`](https://github.com/binedge/llama_cpp-rs/commit/3bada658c9139af1c3dcdb32c60c222efb87a9f6))
- Remove debug binary from Cargo.toml ([`3eddbab`](https://github.com/binedge/llama_cpp-rs/commit/3eddbab3cc35a59acbe66fa4f5333a9ca0edb326))
- Require `llama_context` is accessed from behind a mutex ([`b676baa`](https://github.com/binedge/llama_cpp-rs/commit/b676baa3c1a6863c7afd7a88b6f7e8ddd2a1b9bd))
crates/llama_cpp/Cargo.toml: 2 changes (1 addition, 1 deletion)
@@ -1,6 +1,6 @@
[package]
name = "llama_cpp"
version = "0.1.2"
version = "0.1.3"
description = "High-level bindings to llama.cpp with a focus on just being really, really easy to use"
edition = "2021"
authors = ["Dakota Thompson <[email protected]>"]
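
With the bump merged and published, a downstream crate picks up the release through an ordinary dependency entry; the caret semantics noted in the comment are standard Cargo behavior, not anything specific to this crate:

```toml
[dependencies]
# "0.1.3" means >=0.1.3, <0.2.0: later 0.1.x patch releases are accepted.
llama_cpp = "0.1.3"
```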