Commit
Marcus Dunn committed on Feb 8, 2024
1 parent 4d0c363 · commit 95ab6e3
Showing 1 changed file with 1 addition and 1 deletion.
Submodule llama.cpp updated (24 files changed):
+8 −4          CMakeLists.txt
+119 −50       Makefile
+4 −6          README-sycl.md
+86 −111       README.md
+0 −40         SHA256SUMS
+20 −2         common/common.cpp
+76 −2         convert-hf-to-gguf.py
+9 −5          convert.py
+1 −13         examples/llava/llava-cli.cpp
+21 −1         examples/server/README.md
+242 −206      examples/server/completion.js.hpp
+2 −1          examples/server/public/completion.js
+59 −40        examples/server/server.cpp
+111 −143      ggml-cuda.cu
+2 −0          ggml-impl.h
+39 −94        ggml-quants.c
+68 −59        ggml-quants.h
+115 −81       ggml-sycl.cpp
+1,513 −1,130  ggml-vulkan.cpp
+14 −9         ggml-vulkan.h
+13 −10        ggml.c
+21 −0         gguf-py/gguf/constants.py
+243 −26       llama.cpp
+50 −42        scripts/server-llm.sh
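The file list above describes the new state of the llama.cpp submodule itself; the superproject commit records only the updated submodule pointer (gitlink), which is why it shows 1 addition and 1 deletion. The sketch below illustrates how such a pointer bump is typically produced. It is a minimal illustration, not the author's actual procedure: the submodule path, the target ref, and the commit message are assumptions, and the commands simply wrap git via the Python standard library.

```python
# Minimal sketch of a submodule pointer bump (assumptions: the submodule path,
# the target ref, and the commit message are illustrative, not from this commit).
import subprocess

SUBMODULE_PATH = "llama.cpp"   # hypothetical path; use the repository's actual submodule path
TARGET_REF = "origin/master"   # hypothetical target; the commit does not record which ref was used

def run(args, cwd=None):
    """Run a git command and raise if it fails."""
    subprocess.run(args, cwd=cwd, check=True)

# Fetch and check out the desired llama.cpp revision inside the submodule working tree.
run(["git", "fetch", "origin"], cwd=SUBMODULE_PATH)
run(["git", "checkout", TARGET_REF], cwd=SUBMODULE_PATH)

# Staging the submodule directory records only the new gitlink (commit hash)
# in the superproject, producing the single-line diff seen above.
run(["git", "add", SUBMODULE_PATH])
run(["git", "commit", "-m", "Update llama.cpp"])  # illustrative commit message
```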