Actions: li-plus/chatglm.cpp

Showing runs from all workflows: 379 workflow runs (25 shown on this page).


All runs listed here were triggered by the same commit, "Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp." (pull request #305, synchronize event, by li-plus), on branch dev.

Workflow run         Started                Duration
CMake #276           June 20, 2024 06:03    3m 41s
CMake #275           June 20, 2024 05:13    3m 53s
Python package #252  June 20, 2024 05:13    5m 17s
Python package #251  June 20, 2024 04:25    4m 10s
CMake #274           June 20, 2024 04:25    3m 51s
CMake #273           June 20, 2024 04:22    3m 34s
Python package #250  June 20, 2024 04:22    4m 53s
CMake #272           June 20, 2024 03:39    4m 42s
Python package #249  June 20, 2024 03:39    5m 34s
CMake #271           June 20, 2024 03:37    4m 22s
Python package #248  June 20, 2024 03:37    4m 50s
Python package #247  June 20, 2024 01:09    4m 22s
CMake #270           June 20, 2024 01:09    3m 52s
Python package #246  June 18, 2024 11:48    4m 54s
CMake #269           June 18, 2024 11:48    4m 26s
Python package #245  June 18, 2024 11:29    5m 1s
CMake #268           June 18, 2024 11:29    3m 42s
Python package #244  June 18, 2024 11:21    4m 52s
CMake #267           June 18, 2024 11:21    3m 0s
CMake #266           June 18, 2024 08:28    4m 13s
Python package #243  June 18, 2024 08:28    2m 59s
Python package #242  June 18, 2024 08:17    5m 1s
CMake #265           June 18, 2024 08:17    3m 27s
Python package #241  June 16, 2024 12:51    4m 47s
CMake #264           June 16, 2024 12:51    3m 35s