Hi guys, I wanted to ask: does this support NPUs properly for LLM loading and processing?
No, it doesn't, as llama.cpp doesn't to the best of my knowledge. Once they support it, it will be supported here.
FYI ... abetlen/llama-cpp-python#1702