Self-hosted LLM support #661
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗
@Mrjaggu Thank you for opening this issue! This is already possible if the local LLM supports an "OpenAI-like" API. To do so, you should select any "OpenAI Chat" model and set the "Base URL" field to the base URL of your self-hosted server. If this doesn't meet your use case, however, then please feel free to describe your problem in more detail. For example, what self-hosted LLM services are you trying to use?
See #389 for existing discussion on using self-hosted LLMs through the strategy I just described.
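For illustration, here is a minimal sketch of the same idea outside Jupyter AI, using the official `openai` Python client pointed at a self-hosted, OpenAI-compatible server. The base URL, API key, and model name below are assumptions; substitute whatever your local service actually exposes.

```python
from openai import OpenAI

# Assumed values: replace with your self-hosted server's base URL and model name.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed-for-local",       # many local servers ignore the key
)

response = client.chat.completions.create(
    model="my-local-model",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Hello from a self-hosted LLM!"}],
)
print(response.choices[0].message.content)
```

In Jupyter AI, the "Base URL" field plays the same role as the `base_url` argument here.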
Is it possible to use an internal LLM on the same network with a token provided by MS Entra?
The token request returns an access token. Then:
Step 2 - Get App Context id
How do I configure the Jupyter AI assistant to work with that?
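A rough sketch of what that flow could look like in Python, assuming the internal LLM accepts an Entra-issued bearer token as its API key and exposes an OpenAI-compatible endpoint. The tenant, client, scope, and URL values are all placeholders; the `msal` client-credentials call is the standard way to obtain an application token.

```python
import msal
from openai import OpenAI

# Hypothetical Entra (Azure AD) app registration values.
TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-client-id>"
CLIENT_SECRET = "<your-client-secret>"
SCOPE = ["api://<your-llm-app-id>/.default"]  # assumed scope for the internal LLM

# Step 1 - acquire a token via the client-credentials flow.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
result = app.acquire_token_for_client(scopes=SCOPE)
token = result["access_token"]

# Step 2 - pass the token as the API key to an OpenAI-compatible client.
# In Jupyter AI, the equivalent would be pasting the token into the API key
# field and pointing "Base URL" at the internal endpoint (assumption).
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # hypothetical internal endpoint
    api_key=token,
)
```

Note that Entra tokens expire, so in practice the token would need to be refreshed and the configured key updated periodically.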
Problem
We want to access our own custom-trained LLM through a private endpoint hosted in our local environment.
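Since the route recommended above requires the private endpoint to speak an "OpenAI-like" API, here is a minimal, assumption-laden sketch of wrapping a custom model in such an endpoint with FastAPI. The route shape mirrors `/v1/chat/completions`; `generate_reply` is a hypothetical stand-in for the custom model's inference call.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Message(BaseModel):
    role: str
    content: str

class ChatRequest(BaseModel):
    model: str
    messages: list[Message]

def generate_reply(messages: list[Message]) -> str:
    # Hypothetical stand-in for your custom model's inference call.
    return "placeholder response from the custom model"

@app.post("/v1/chat/completions")
def chat_completions(req: ChatRequest):
    # Return the minimal subset of the OpenAI response schema
    # that chat clients typically read.
    return {
        "id": "chatcmpl-local",
        "object": "chat.completion",
        "model": req.model,
        "choices": [
            {
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": generate_reply(req.messages),
                },
                "finish_reason": "stop",
            }
        ],
    }
```

Run it with `uvicorn server:app` and point Jupyter AI's "Base URL" at `http://localhost:8000/v1` (8000 is uvicorn's default port).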