Hi, since commit 905d87b70aa189623d500a28602d7a3a755a4769,
llama.cpp supports GPU inference with NVIDIA CUDA via command-line switches like --gpu-layers.
Could you please consider adding support to gpt-llama.cpp as well?
Thank you!
alexl83 changed the title from "llama.cpp GPU support!" to "llama.cpp GPU support" on May 14, 2023
@alexl83
Just looking at the code: since you compile llama.cpp yourself, in theory you can build it with CUDA support and then pass the argument like any of the others listed, such as threads.
For example: npm start with the GPU-layers argument to offload 4 layers to the GPU. I don't have a compatible setup to test, though.
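If that works, the underlying flow would be roughly the following (a sketch only, assuming a Linux build with the CUDA toolkit installed; the model path and layer count are placeholders, and the `--gpu-layers` flag name is taken from the commit referenced above):

```shell
# Rebuild llama.cpp with CUDA (cuBLAS) support enabled
make clean && make LLAMA_CUBLAS=1

# Run inference, offloading 4 layers to the GPU
# (model path is a placeholder for your local GGML model file)
./main -m ./models/7B/ggml-model-q4_0.bin --gpu-layers 4 -p "Hello"
```

gpt-llama.cpp would then only need to forward a `--gpu-layers` value from its own configuration into the llama.cpp invocation it already constructs.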