Is dspy.OllamaLocal here to stay? #1811
Comments
Hey @NumberChiffre! LiteLLM supports any Ollama model. I use Llama 3.2 on Ollama. What are you seeing in terms of failure cases? import dspy …
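Something like this is a minimal sketch of that setup, assuming a local Ollama server on the default port (the model tag is just an example):

```python
import dspy

# Rough sketch of the setup described above; the model tag and server address
# are assumptions (a locally pulled Llama 3.2 on Ollama's default port).
lm = dspy.LM("ollama_chat/llama3.2", api_base="http://localhost:11434", api_key="")
dspy.configure(lm=lm)

# dspy.LM is directly callable and returns a list of completions.
print(lm("Say hello in one short sentence."))
```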
@NumberChiffre Maybe you passed …
@okhat Thanks for the suggestion, actually it worked with …
@NumberChiffre Please try ollama_chat. I think litellm makes a big distinction between the two. I doubt it'll fail often with structured outputs, but let me know if it does and if you have an example!
If you have a self-contained failure (even one is helpful), like inputs + signature -> fail, we'd love to take a look. Especially with ollama_chat/*, since ollama/* is probably just not processing the inputs correctly, based on a few anecdotes from other users.
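For anyone putting such a repro together, here is a hedged sketch of the shape we're looking for; the signature, model tag, and inputs below are placeholders rather than a known failing case:

```python
import dspy

# Placeholder model and endpoint: swap in whichever local model actually
# triggers the structured-output failure you are seeing.
lm = dspy.LM("ollama_chat/qwen2.5", api_base="http://localhost:11434", api_key="")
dspy.configure(lm=lm)

class ExtractEntities(dspy.Signature):
    """Extract named entities from a sentence."""

    sentence: str = dspy.InputField()
    entities: list[str] = dspy.OutputField(desc="entity surface forms, in order")

# Structured-output failures usually surface when the adapter tries to parse
# the model's reply back into the typed output fields.
module = dspy.ChainOfThought(ExtractEntities)
prediction = module(sentence="Apple opened a new office in Toronto last week.")
print(prediction.entities)
```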
Hey guys,
I've missed the latest updates over the past 1-2 months. I remember an update around v2.5 saying that dspy.OllamaLocal and everything else would be removed and replaced by dspy.LM, which uses litellm under the hood.

For those of us using local models, it seems we still need dspy.OllamaLocal, since the newest llama and qwen models are not listed as supported by litellm: https://docs.litellm.ai/docs/providers/ollama#ollama-models

Even for the local models that are supported, the structured output format fails frequently with dspy.LM, while dspy.OllamaLocal works for most cases.

Could you share any forward guidance on your roadmap for those of us relying on local models?
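For reference, a rough sketch of the two setups I'm contrasting; the OllamaLocal parameter names are approximate (from memory of the pre-2.5 client) and the model tags are placeholders:

```python
import dspy

# Old client (pre-2.5), from memory -- parameter names are approximate:
# ollama_lm = dspy.OllamaLocal(model="llama3", base_url="http://localhost:11434")
# dspy.settings.configure(lm=ollama_lm)

# Replacement path: dspy.LM routes through litellm. Any model tag pulled into
# the local Ollama server can be addressed here, even if it is not listed on
# the litellm provider page. The tag below is just an example.
lm = dspy.LM("ollama_chat/qwen2.5", api_base="http://localhost:11434", api_key="")
dspy.configure(lm=lm)
```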