It has been great to see Ollama added as a first-class option in #646; it has made it easy to access a huge variety of models and has been working very well for us.
I increasingly see groups and university providers using vLLM for this as well. I'm out of my depth here, but my understanding is that vLLM is considered better suited when a group is serving a local model to multiple users (e.g. from a shared GPU cluster, rather than everyone running an independent Ollama instance). It gets passing mention in some other threads here as well. Supporting more providers is all to the good, and I would love to see vLLM supported as a backend similar to the existing Ollama support. Though maybe I'm missing some details and that is unnecessary: it looks like it might be possible to simply point the existing OpenAI configuration at an alternative endpoint to reach a vLLM server, as sketched below?
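For what it's worth, here is a minimal sketch of that idea: vLLM exposes an OpenAI-compatible API, so a standard OpenAI client pointed at the server's `/v1` endpoint should just work. The URL, model name, and API key below are placeholders, and whether this project's OpenAI configuration actually accepts an alternative base URL is exactly the open question above.

```python
# Minimal sketch: talking to a vLLM server through its OpenAI-compatible API.
# Assumes the server was started with something like
#   vllm serve meta-llama/Llama-3.1-8B-Instruct
# and is reachable at http://localhost:8000/v1 (both are placeholders).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",  # vLLM ignores the key unless started with --api-key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the model the server loaded
    messages=[{"role": "user", "content": "Hello from a vLLM backend!"}],
)
print(response.choices[0].message.content)
```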
It looks like the team at the National Research Platform has a nice workaround for this at the moment: using LiteLLM via its OpenAI-compatible API (https://docs.litellm.ai/docs/proxy/user_keys). This works, though it isn't quite the same as direct vLLM support, but I thought it was worth mentioning; a rough sketch of the pattern follows.
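A rough sketch of that pattern, in case it is useful: the LiteLLM proxy speaks the OpenAI API, so the standard OpenAI Python SDK environment variables can point at it. The proxy URL and virtual key below are placeholders, and the variable names are the ones the official OpenAI SDK reads; this project may well use its own configuration names instead.

```python
# Minimal sketch: routing OpenAI-style requests through a LiteLLM proxy.
# The proxy URL and virtual key (sk-...) are placeholders; the OpenAI SDK
# picks both up automatically if OPENAI_BASE_URL and OPENAI_API_KEY are set
# in the environment, otherwise they can be passed explicitly as below.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "https://litellm.example.org/v1"),
    api_key=os.environ.get("OPENAI_API_KEY", "sk-your-litellm-virtual-key"),
)

response = client.chat.completions.create(
    # Model names are whatever the proxy admin configured in the LiteLLM config.
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello through the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```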
Which environment variables do I have to set up in order to use my own LiteLLM instance, such as https://litellm.mylaboratory.gov/?
Thanks for your help!