-
Thanks! To start, I think you can take a look at the existing model clients. To support an Ollama client, we need to create a separate extension module.
-
Hi everyone,
I created a handy class definition for making local calls with Ollama in _openai_client.py to allow almost full compatibility with tools and several models without mapping issues. This feature will help AutoGen users leverage local LLMs like Ollama easily. Below is the code, as I'm a bit clueless on how to contribute directly.
I've tested this on Ollama, and it works perfectly. However, I've noticed some compatibility issues with LM Studio.
To use this:
1. In autogen_core/components/models/_openai_client.py, change the existing config import to also include LocalOpenAIClientConfiguration:
from .config import AzureOpenAIClientConfiguration, OpenAIClientConfiguration, LocalOpenAIClientConfiguration
2. Add the following class definition to the end of the file:
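Roughly, the class looks like this. This is only a sketch: it relies on names already defined or imported in _openai_client.py (BaseOpenAIChatCompletionClient, ModelCapabilities, and the private helpers _openai_client_from_config and _create_args_from_config), so adjust the names if your version of the file differs:

```python
class LocalOpenAIChatCompletionClient(BaseOpenAIChatCompletionClient):
    """OpenAI-compatible chat client for local servers such as Ollama.

    Unlike OpenAIChatCompletionClient, it does not try to resolve the model
    name against the list of known OpenAI models, so any local model name is
    accepted as-is.
    """

    def __init__(self, **kwargs: Unpack[LocalOpenAIClientConfiguration]):
        if "model" not in kwargs:
            raise ValueError("model is required for LocalOpenAIChatCompletionClient")

        copied_args = dict(kwargs).copy()

        # If the caller does not describe the model, assume it supports nothing
        # special (no vision, no function calling, no JSON output).
        model_capabilities: Optional[ModelCapabilities] = copied_args.pop(
            "model_capabilities", None
        )
        if model_capabilities is None:
            model_capabilities = ModelCapabilities(
                vision=False, function_calling=False, json_output=False
            )

        # Cap generation length so local servers do not fall back to their own
        # (often very small) defaults.
        copied_args.setdefault("max_tokens", 4096)

        # Reuse the module's existing helpers to build the underlying
        # openai.AsyncOpenAI client and the arguments for create().
        client = _openai_client_from_config(copied_args)
        create_args = _create_args_from_config(copied_args)

        self._raw_config = copied_args
        super().__init__(client, create_args, model_capabilities)
```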
3. Add the corresponding configuration TypedDict to autogen_core/components/models/config/__init__.py:
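Something like this should do. It is a sketch that simply reuses the fields of the existing OpenAIClientConfiguration TypedDict in that file; if the file defines an __all__ list, add the new name there as well:

```python
class LocalOpenAIClientConfiguration(OpenAIClientConfiguration, total=False):
    """Configuration for LocalOpenAIChatCompletionClient.

    Same fields as OpenAIClientConfiguration; base_url should point at the
    local server (for example Ollama's OpenAI-compatible endpoint), and
    model_capabilities / max_tokens remain optional.
    """
```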
4. Finally, change the imports in autogen_core/components/models/__init__.py to look like this:
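Only the changed lines are shown here (a sketch); keep the rest of the file's imports and exports as they are:

```python
# autogen_core/components/models/__init__.py -- only the changed lines shown.
from ._openai_client import (
    AzureOpenAIChatCompletionClient,
    LocalOpenAIChatCompletionClient,
    OpenAIChatCompletionClient,
)

__all__ = [
    # ... existing exports ...
    "AzureOpenAIChatCompletionClient",
    "LocalOpenAIChatCompletionClient",
    "OpenAIChatCompletionClient",
]
```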
Example Usage: Here’s an example of how to use the LocalOpenAIChatCompletionClient, which is almost identical to the existing examples:
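A sketch of the usage; the model name, base_url, and api_key value are assumptions for a default local Ollama install, so substitute your own values:

```python
import asyncio

from autogen_core.components.models import LocalOpenAIChatCompletionClient, UserMessage


async def main() -> None:
    client = LocalOpenAIChatCompletionClient(
        model="llama3.1",                        # any model you have pulled in Ollama
        base_url="http://localhost:11434/v1",    # Ollama's OpenAI-compatible endpoint
        api_key="ollama",                        # Ollama ignores the key, but the SDK requires one
        model_capabilities={
            "vision": False,
            "function_calling": True,
            "json_output": False,
        },
    )
    result = await client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content)


asyncio.run(main())
```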
Key Differences:
- **model_capabilities** is set to all-False by default if not passed explicitly.
- **max_tokens** defaults to 4096 if no value is given, which helps bypass issues with the "hacky" name-change mapping in local setups.
I hope you find this useful! I'm sharing it this way because I'm not sure the changes are safe to merge directly, and I wanted to show how AutoGen core can be used with local setups more easily.