Add sampling API back to LlamaTokenDataArray; Add DRY and XTC Samplers #654
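The PR title references the DRY and XTC samplers. As background, XTC ("exclude top choices") probabilistically removes the most likely candidates whenever more than one of them clears a probability threshold. Below is a minimal Rust sketch of that rule, assuming a plain `(token_id, probability)` candidate list and the `rand` crate; it illustrates the algorithm only and is not the API added to `LlamaTokenDataArray` by this PR.

```rust
use rand::Rng;

/// Illustrative sketch of the XTC rule (not the crate's API): with probability `p`,
/// drop every candidate whose probability is at or above `threshold` except the
/// least likely of them, so generation does not always take the single most
/// obvious token. The caller is expected to renormalize probabilities afterwards.
fn xtc_filter(candidates: &mut Vec<(u32, f32)>, threshold: f32, p: f32, rng: &mut impl Rng) {
    // Only activate the rule with probability `p`.
    if rng.gen::<f32>() >= p {
        return;
    }
    // Sort by probability, highest first.
    candidates.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    // Count how many candidates clear the threshold.
    let above = candidates
        .iter()
        .take_while(|(_, prob)| *prob >= threshold)
        .count();
    // If at least two clear it, remove all but the last (least probable) of them.
    if above >= 2 {
        candidates.drain(..above - 1);
    }
}
```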
llama-cpp-rs-check.yml (on: pull_request)

Jobs:
- Run Tests on LLama Cpp Rs: 3m 8s
- Check that it builds on mac: 1m 12s
- Check that it builds on windows: 3m 57s
- Matrix: Check that it builds on various targets
Annotations
5 errors and 2 warnings
Errors:
- Check that it builds on various targets (linux/amd64): buildx failed with: ERROR: failed to solve: process "/bin/sh -c cargo build --bin simple --features cuda" did not complete successfully: exit code: 101
- Check that it builds on various targets (linux/arm64): The job was canceled because "linux_amd64" failed.
- Check that it builds on various targets (linux/arm64): The operation was canceled.
- Run Tests on LLama Cpp Rs: Process completed with exit code 101.
- Check that it builds on windows: Process completed with exit code 1.

Warnings:
- Check that it builds on various targets (linux/amd64): ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
- Run Tests on LLama Cpp Rs: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
Artifacts
Produced during runtime
| Name | Size |
|---|---|
| utilityai~llama-cpp-rs~JKJJ6L.dockerbuild | 44.1 KB |