Ollama Langchain agent #31

Merged 4 commits on Mar 12, 2024
4 changes: 2 additions & 2 deletions prediction_market_agent/agents/langchain_agent.py
@@ -7,9 +7,9 @@


 class LangChainAgent(AbstractAgent):
-    def __init__(self) -> None:
+    def __init__(self, llm=None) -> None:
         keys = utils.APIKeys()
-        llm = OpenAI(openai_api_key=keys.openai_api_key)
+        llm = OpenAI(openai_api_key=keys.openai_api_key) if not llm else llm
Contributor

The addition of an optional llm parameter to the LangChainAgent class is a welcome change for flexibility. However, there are a few areas that could be improved:

  1. Default Behavior: The current implementation replaces the llm argument with a new OpenAI instance whenever it is falsy, not only when it is None, and this is not obvious from the method signature. Consider explicitly checking whether llm is None before initializing a new OpenAI instance to make the logic more transparent (see the sketch after this list).
  2. Error Handling: There's no explicit error handling for issues that might arise during the initialization of the OpenAI instance, such as invalid API keys. Adding error handling and logging could improve the robustness of the agent initialization.
  3. Documentation: The method could benefit from documentation comments explaining the purpose of the llm parameter and the expected type. This would help future developers understand the intended use and constraints of the parameter.
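
For illustration, a minimal sketch of points 1–3 combined: an explicit check for llm is None, a fail-fast guard on a missing key, and a docstring. The import paths for OpenAI and AbstractAgent, and the behaviour of utils.APIKeys, are not visible in this hunk, so treat them as assumptions rather than the repository's actual layout:

```python
from langchain_openai import OpenAI  # assumed import path; only the OpenAI name appears in the diff

from prediction_market_agent import utils  # assumed; the diff only references utils.APIKeys()
from prediction_market_agent.agents.abstract_agent import AbstractAgent  # assumed module path


class LangChainAgent(AbstractAgent):
    def __init__(self, llm=None) -> None:
        """Initialize the agent.

        Args:
            llm: Optional LangChain-compatible LLM. When None, an OpenAI LLM
                is built from the configured API key.
        """
        if llm is None:
            keys = utils.APIKeys()
            # Assumes APIKeys exposes a falsy openai_api_key when unset; fail fast
            # with a clear message instead of erroring later inside the OpenAI client.
            if not keys.openai_api_key:
                raise ValueError("No llm was provided and openai_api_key is not configured.")
            llm = OpenAI(openai_api_key=keys.openai_api_key)
        # ... the rest of the original __init__ (tool and agent setup) continues here
```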

# Can use pre-defined search tool
# TODO: Tavily tool could give better results
# https://docs.tavily.com/docs/tavily-api/langchain
10 changes: 10 additions & 0 deletions prediction_market_agent/agents/ollama_langchain_agent.py
@@ -0,0 +1,10 @@
from langchain_community.llms.ollama import Ollama

from prediction_market_agent.agents.langchain_agent import LangChainAgent


class OllamaLangChainAgent(LangChainAgent):
    def __init__(self) -> None:
        # Make sure Ollama is running locally
        llm = Ollama(model='mistral', base_url='http://localhost:11434')  # Mistral since it supports function calling
        super().__init__(llm=llm)
Contributor

The OllamaLangChainAgent class is correctly inheriting from LangChainAgent and is initialized with an Ollama instance. However, there are a few areas that could be improved for better robustness and flexibility:

  1. Error Handling: There's no error handling for the case where the Ollama service is not running locally. Consider adding a try-except block around the Ollama initialization to catch connection errors and provide a meaningful error message (see the sketch after this list).
  2. Configuration Flexibility: The model and base URL for the Ollama instance are hardcoded. It might be beneficial to allow these to be passed as parameters to the __init__ method, providing more flexibility for different environments or use cases.
  3. Performance Considerations: Ensure that the local Ollama instance is adequately optimized for the expected workload. Depending on the deployment environment, it might be necessary to adjust the configuration or scale the service to handle the load efficiently.
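
For points 1 and 2, a rough sketch of a configurable constructor with an upfront reachability check; the requests-based health probe, the parameter names, and their defaults are illustrative assumptions, not part of this PR:

```python
import requests
from langchain_community.llms.ollama import Ollama

from prediction_market_agent.agents.langchain_agent import LangChainAgent


class OllamaLangChainAgent(LangChainAgent):
    def __init__(
        self,
        model: str = "mistral",
        base_url: str = "http://localhost:11434",
    ) -> None:
        # Fail fast with a readable error if the local Ollama server is unreachable,
        # instead of surfacing a low-level connection error on the first LLM call.
        try:
            requests.get(base_url, timeout=2).raise_for_status()
        except requests.RequestException as e:
            raise RuntimeError(
                f"Could not reach Ollama at {base_url}. Is `ollama serve` running?"
            ) from e
        llm = Ollama(model=model, base_url=base_url)
        super().__init__(llm=llm)
```

Something like OllamaLangChainAgent(model="llama2", base_url="http://ollama:11434") could then target a different model or a remote instance without code changes.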
