Actions: li-plus/chatglm.cpp

Showing runs from all workflows
374 workflow runs

Upload Python Package
Upload Python Package #17: Manually run by li-plus
April 23, 2024 13:30 54s main
Support p-tuning v2 for ChatGLM family & fix rope theta for 32k/128k …
Python package #211: Commit 829e9a7 pushed by li-plus
April 23, 2024 13:28 3m 35s main
Support p-tuning v2 finetuned models for ChatGLM family
CMake #233: Pull request #289 synchronize by li-plus
April 18, 2024 07:08 2m 35s ptuning-v2
Support p-tuning v2 finetuned models for ChatGLM family
Python package #210: Pull request #289 synchronize by li-plus
April 18, 2024 07:08 3m 42s ptuning-v2
Support p-tuning v2 finetuned models for ChatGLM family
Python package #209: Pull request #289 opened by li-plus
April 18, 2024 07:02 4m 19s ptuning-v2
Support p-tuning v2 finetuned models for ChatGLM family
CMake #232: Pull request #289 opened by li-plus
April 18, 2024 07:02 2m 28s ptuning-v2
Add sm89 by default for rtx 4090 (#282)
CMake #231: Commit 04910ce pushed by li-plus
March 17, 2024 02:41 2m 45s main
Add sm89 by default for rtx 4090 (#282)
Python package #208: Commit 04910ce pushed by li-plus
March 17, 2024 02:41 3m 18s main
Add sm89 by default for rtx 4090
Python package #207: Pull request #282 opened by li-plus
March 16, 2024 16:57 4m 23s cuda-cmake
Add sm89 by default for rtx 4090
CMake #230: Pull request #282 opened by li-plus
March 16, 2024 16:57 2m 26s cuda-cmake
Fix convert.py for chatglm3-6b-128k (#280)
CMake #229: Commit f7a2457 pushed by li-plus
March 13, 2024 08:24 3m 42s main
Fix convert.py for chatglm3-6b-128k (#280)
Python package #206: Commit f7a2457 pushed by li-plus
March 13, 2024 08:24 5m 23s main
Update convert.py
CMake #228: Pull request #280 synchronize by li-plus
March 13, 2024 08:23 2m 59s futz12:patch-1
Update convert.py
Python package #205: Pull request #280 synchronize by li-plus
March 13, 2024 08:23 4m 6s futz12:patch-1
Update convert.py
Python package #204: Pull request #280 synchronize by li-plus
March 13, 2024 08:16 4m 31s futz12:patch-1
Update convert.py
CMake #227: Pull request #280 synchronize by li-plus
March 13, 2024 08:16 3m 24s futz12:patch-1
Update convert.py
Python package #203: Pull request #280 opened by futz12
March 13, 2024 05:29 4m 22s futz12:patch-1
Update convert.py
CMake #226: Pull request #280 opened by futz12
March 13, 2024 05:29 3m 13s futz12:patch-1
Better cuda compile script respecting nvcc version (#279)
Python package #202: Commit 6d6bc3c pushed by li-plus
March 13, 2024 03:30 4m 26s main
Better cuda compile script respecting nvcc version (#279)
CMake #225: Commit 6d6bc3c pushed by li-plus
March 13, 2024 03:30 2m 55s main
Better cuda compile script respecting nvcc version
Python package #201: Pull request #279 opened by li-plus
March 13, 2024 02:28 5m 19s cuda-cmake
Better cuda compile script respecting nvcc version
CMake #224: Pull request #279 opened by li-plus
March 13, 2024 02:28 3m 55s cuda-cmake
Deprecate baichuan & internlm in favor of llama.cpp (#278)
CMake #223: Commit 080aa02 pushed by li-plus
March 12, 2024 04:10 2m 24s main
Deprecate baichuan & internlm in favor of llama.cpp (#278)
Python package #200: Commit 080aa02 pushed by li-plus
March 12, 2024 04:10 5m 27s main