
#178 - Add support for running local LLMs via LLaMA C/C++ port #173

Triggered via pull request on November 2, 2023 14:19
Status: Failure
Total duration: 2m 29s
Artifacts: none

build.yml

on: pull_request

Annotations: 1 error and 1 warning
Error (Build): Process completed with exit code 1.
Warning (Build): No files were found with the provided path: /home/runner/work/CodeGPT/CodeGPT/build/reports/tests. No artifacts will be uploaded.
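The warning is the standard actions/upload-artifact message for a path that matches no files: with if-no-files-found left at its default of warn, the step logs a warning instead of failing the job. Below is a minimal sketch of what the test-report upload step in build.yml might look like; the step name, artifact name, action version, and the if: always() guard are assumptions, with only the report path taken from the log.

```yaml
# Hypothetical upload step for build.yml; only the path comes from the log above.
- name: Upload test reports            # assumed step name
  if: always()                         # assumed: attempt the upload even when Build fails
  uses: actions/upload-artifact@v3
  with:
    name: test-reports                 # hypothetical artifact name
    path: build/reports/tests          # Gradle's default test-report location
    if-no-files-found: warn            # default; produces the warning seen in this run
```

Since the Build step exited with code 1 before any test reports were written, the path would have been empty, which would explain both the warning and the absence of artifacts on this run.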