Add instructions for pre-built image on Docker Hub (#109)
li-plus authored Aug 23, 2023
1 parent 1cfac4a commit 981c088
Showing 3 changed files with 13 additions and 237 deletions.
17 changes: 13 additions & 4 deletions README.md
````diff
@@ -320,14 +320,23 @@ docker build . --network=host -t chatglm.cpp-cuda \
 docker run -it --rm --gpus all -v $PWD:/chatglm.cpp/models chatglm.cpp-cuda ./build/bin/main -m models/chatglm-ggml.bin -p "你好"
 ```
 
-**Option 2: Pulling from GHCR**
+**Option 2: Using Pre-built Image**
 
-Pre-built image for CPU inference is published on GitHub Container Registry (GHCR). Download it with the below script and use it in the same way:
+The pre-built image for CPU inference is published on both [Docker Hub](https://hub.docker.com/repository/docker/liplusx/chatglm.cpp) and [GitHub Container Registry (GHCR)](https://github.com/li-plus/chatglm.cpp/pkgs/container/chatglm.cpp).
+
+To pull from Docker Hub and run the demo:
+```sh
+docker run -it --rm -v $PWD:/opt liplusx/chatglm.cpp:main \
+    ./build/bin/main -m /opt/chatglm-ggml.bin -p "你好"
+```
+
+To pull from GHCR and run the demo:
 ```sh
-docker pull ghcr.io/li-plus/chatglm.cpp:main
 docker run -it --rm -v $PWD:/opt ghcr.io/li-plus/chatglm.cpp:main \
     ./build/bin/main -m /opt/chatglm-ggml.bin -p "你好"
 ```
-Visit [container/chatglm.cpp](https://github.com/li-plus/chatglm.cpp/pkgs/container/chatglm.cpp) for more information.
+
+The Python demo and API servers are also supported in the pre-built image; use them in the same way as **Option 1**.
 
 ## Performance
````
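As a convenience, the pull-and-run step in the diff above can be wrapped in a small guard script that refuses to start the container when the converted model is missing. This is only a sketch: the function name `run_chatglm` is hypothetical, and it assumes the Docker Hub image tag and default model filename from the examples above.

```sh
#!/bin/sh
# Hypothetical wrapper around the pre-built liplusx/chatglm.cpp:main image.
# Checking for the converted GGML model up front gives a clearer error
# than a failed path inside the container.
run_chatglm() {
    model="${1:-chatglm-ggml.bin}"  # default filename from the README example
    if [ ! -f "$model" ]; then
        echo "error: model file '$model' not found; convert a model first" >&2
        return 1
    fi
    # Mount the working directory at /opt so the container can read the model.
    docker run -it --rm -v "$PWD:/opt" liplusx/chatglm.cpp:main \
        ./build/bin/main -m "/opt/$(basename "$model")" -p "你好"
}
```

Called with no argument, `run_chatglm` uses `chatglm-ggml.bin` from the current directory; pass another filename to run a different converted model.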
10 changes: 0 additions & 10 deletions api_Dockerfile

This file was deleted.

223 changes: 0 additions & 223 deletions examples/api_demo.py

This file was deleted.
