Running in instruct mode and model file in a different directory #35
instruct isn't a valid flag because it's handled by the API itself – ChatCompletion will simulate a chat response and Completion will simulate a plain completion specifically. So the flag isn't necessary (the app using the OpenAI API should already be requesting the right "instruct" behavior when needed). For the model, you want to pass that into the GPT app instead (like chatbot-ui or Auto-GPT), typically in the …
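For illustration, a minimal sketch of how a client app picks between the two modes, assuming the server is listening on port 14003 (as in the command later in this thread) and exposes the standard OpenAI-style routes; the payloads are placeholders:

```sh
# Chat-style request, served by the ChatCompletion handler
curl http://localhost:14003/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'

# Plain completion request, served by the Completion handler
curl http://localhost:14003/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello"}'
```

Whichever route the app calls determines whether it gets chat-style or plain-completion behavior, so the server doesn't need a separate instruct flag.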
That would be a weird abuse of a variable. It would be much better to have a LOCAL_MODEL_PATH variable and, if no local model path is set, fall back to OpenAI's API (see the sketch below). I would favor using a de facto standard local API such as text-generation-webui's API rather than reinventing the wheel by running local models directly. For one thing, sharing one local API means multiple tools can use it. For another, there's a LOT of complexity in supporting local acceleration hardware, different model types, and so on. Just using a standard local API keeps it a lot simpler.
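A minimal sketch of that suggested fallback, assuming a hypothetical LOCAL_MODEL_PATH variable; neither it nor the use of OPENAI_API_BASE here is an existing option in this project, they're illustrative names only:

```sh
# Hypothetical convention: LOCAL_MODEL_PATH selects a local model;
# when it's unset, the tool falls back to OpenAI's hosted API.
if [ -n "$LOCAL_MODEL_PATH" ]; then
  # Local path set: point the app at a local OpenAI-compatible server
  # (which would be the process actually loading $LOCAL_MODEL_PATH).
  export OPENAI_API_BASE="http://localhost:14003/v1"
else
  # No local model configured: use OpenAI's API.
  export OPENAI_API_BASE="https://api.openai.com/v1"
fi
```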
@keldenl |
The thing about this is that the end goal for this project is to be able to plug 'n' play with any GPT-powered project – the fewer changes to the code (even zero changes, as with chatbot-ui), the better.
@regstuff it sounds like you might be running into a different issue – any chance you could post what's showing up in your terminal and what the request is? (Where are you using the server? chatbot-ui?) Also, I just merged some changes that should give you better error logging, so maybe pull and then post here?
I was wondering how I could pass the arguments --instruct and --model to the npm start command:

```sh
PORT=14003 npm start mlock ctx_size 1500 threads 12 instruct model ~/llama_models/wizardLM-7B-GGML/wizardLM-7B.ggml.q5_1.bin
```

I get an Args error:

```
instruct is not a valid argument. model is not a valid argument.
```
These are valid arguments for llama.cpp when running Alpaca-style models from a directory other than the default model folder.
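For comparison, a sketch of how the same options look when llama.cpp's ./main binary is run directly rather than through npm start; the flag spellings are from llama.cpp builds of that era, so double-check ./main --help for your build:

```sh
# Running llama.cpp directly (flags as printed by ./main --help, ca. 2023)
./main --instruct --mlock --ctx_size 1500 --threads 12 \
  --model ~/llama_models/wizardLM-7B-GGML/wizardLM-7B.ggml.q5_1.bin
```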