
Customizing OpenAI Models and Endpoint #295

Open
ImYrS opened this issue Apr 16, 2024 · 7 comments

Comments


ImYrS commented Apr 16, 2024

Proposal

  • Allow the model name or model list to be set manually.
  • Allow the OpenAI base URL to be set manually.

Use-Case

For self-hosted deployments, some people need to point at a different OpenAI endpoint. In China, for example, api.openai.com is blocked.

In addition, many users rely on a project called one-api to aggregate models from different providers. It exposes those models behind an OpenAI-compatible API, so users can test prompts with models that are not from OpenAI or Claude.

In summary, I hope these two features can be implemented; I think they would be useful for many users.

Is this a feature you are interested in implementing yourself?

Maybe

@arielweinberger
Member

Hi ImYrS, I understand the reasoning behind this change. Is this something you'd like to contribute to? This has to do with the proxy service.

@ImYrS
Author

ImYrS commented Apr 29, 2024

Hi, glad to hear from you. I'm sorry, but I don't have time to contribute at the moment.

My understanding is that this doesn't really require changing the proxy service; it may be enough to add some env vars for the API endpoint, and to allow a custom model name to be entered.

Since I haven't read this project's code completely, my understanding may be wrong. Please point out any problems, thank you very much!
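The env-var idea above could be sketched roughly like this (a minimal sketch, not the project's actual code; `OPENAI_BASE_URL` and `OPENAI_MODELS` are hypothetical variable names):

```typescript
// Minimal sketch of resolving the endpoint and model list from env vars.
// OPENAI_BASE_URL and OPENAI_MODELS are hypothetical names, chosen for
// illustration only; the project may use different ones.
const DEFAULT_BASE_URL = "https://api.openai.com/v1";
const DEFAULT_MODELS = ["gpt-4o", "gpt-3.5-turbo"];

type Env = Record<string, string | undefined>;

// Fall back to the official endpoint when no override is set.
function resolveBaseUrl(env: Env): string {
  const url = env.OPENAI_BASE_URL?.trim();
  return url ? url : DEFAULT_BASE_URL;
}

// Accept a comma-separated override, e.g. OPENAI_MODELS="gpt-4o,llama3:8b".
function resolveModels(env: Env): string[] {
  const raw = env.OPENAI_MODELS;
  if (!raw) return DEFAULT_MODELS;
  return raw
    .split(",")
    .map((m) => m.trim())
    .filter((m) => m.length > 0);
}
```

If the proxy built its OpenAI client from these values, pointing the base-URL variable at a one-api instance or a local server would be enough to route requests elsewhere without further code changes.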

@PyrokineticDarkElf

I'd also like to see this. Being able to set a local AI endpoint is very important to me.

@arielweinberger
Member

Contributions for this are welcome. At this point, I don't have the time to dedicate to building this feature. Sorry.

@ranst91
Collaborator

ranst91 commented Aug 16, 2024

@ImYrS & @PyrokineticDarkElf can you briefly describe your use cases for different models and a custom API base URL?
I started looking into it and want to make sure I can provide a solution that solves this need.

For the base URL, I think a simple env variable will suffice. For the models, we'll need a different solution.

@PyrokineticDarkElf

I think an env var would work for my needs. My use case is just to use a local LLM server (Ollama, LM Studio, etc.) rather than an online provider.
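For the local-server case, assuming the base URL does become configurable through an env var (hypothetical name `OPENAI_BASE_URL`), pointing it at a local OpenAI-compatible server might look like:

```shell
# Hypothetical variable names; the actual names depend on the implementation.
# Ollama serves an OpenAI-compatible API under /v1 on its default port.
export OPENAI_BASE_URL="http://localhost:11434/v1"
# Local servers typically ignore the key, but clients often require one to be set.
export OPENAI_API_KEY="ollama"
```

LM Studio works the same way, just on its own port (1234 by default).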

@ranst91
Collaborator

ranst91 commented Aug 19, 2024

@PyrokineticDarkElf @ImYrS As you can see, there's a PR to address this issue.
Please take a look and confirm it addresses your issue before I merge it.

4 participants