
Add sampling API back to LlamaTokenDataArray; Add DRY and XTC Samplers #654


Re-run triggered: December 7, 2024 01:01
Status: Failure
Total duration: 4m 11s
Artifacts: 1

llama-cpp-rs-check.yml

on: pull_request

Run Tests on LLama Cpp Rs (3m 8s)
Check that it builds on mac (1m 12s)
Check that it builds on windows (3m 57s)
Matrix: Check that it builds on various targets

Annotations

5 errors and 2 warnings
Errors:

Check that it builds on various targets (linux/amd64):
buildx failed with: ERROR: failed to solve: process "/bin/sh -c cargo build --bin simple --features cuda" did not complete successfully: exit code: 101

Check that it builds on various targets (linux/arm64):
The job was canceled because "linux_amd64" failed.

Check that it builds on various targets (linux/arm64):
The operation was canceled.

Run Tests on LLama Cpp Rs:
Process completed with exit code 101.

Check that it builds on windows:
Process completed with exit code 1.

Warnings:

Check that it builds on various targets (linux/amd64):
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636

Run Tests on LLama Cpp Rs:
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
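
The linux/amd64 error is the Docker buildx job running the crate's CUDA build step inside the container. Assuming a local checkout of the repository and an installed CUDA toolkit (both assumptions, not shown in this run), the same step can be reproduced outside Docker with:

```shell
# Run the exact build step from the failing job (assumes the CUDA
# toolkit and nvcc are available locally; without them this fails
# earlier, in the cuda feature's build script).
cargo build --bin simple --features cuda
```

Exit code 101 is cargo's generic failure status for compile errors, so the compiler output from this command is usually enough to locate the breakage.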

Artifacts

Produced during runtime
Name: utilityai~llama-cpp-rs~JKJJ6L.dockerbuild (44.1 KB)