Add support for Llama2, Palm, Cohere, Anthropic, Replicate, Azure Models - using litellm #70

Open · wants to merge 1 commit into base: main
4 changes: 2 additions & 2 deletions offline_tools/generator_questions/question_generator.py
```diff
@@ -9,7 +9,7 @@
 from langchain.schema import SystemMessage
 from langchain.output_parsers import StructuredOutputParser, ResponseSchema
 from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
-from langchain.chat_models import ChatOpenAI
+from langchain.chat_models import ChatOpenAI, ChatLiteLLM
 
 sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
```
```diff
@@ -22,7 +22,7 @@ class QuestionGenerator:
     openai_api_key: Optional[str] = QUESTIONGENERATOR_CONFIG.get('openai_api_key', os.getenv('OPENAI_API_KEY'))
     max_tokens: Optional[int] = QUESTIONGENERATOR_CONFIG.get('max_tokens', None)
 
-    chat: BaseChatModel = ChatOpenAI(temperature=temperature, openai_api_key=openai_api_key)
+    chat: BaseChatModel = ChatLiteLLM(temperature=temperature, openai_api_key=openai_api_key)
 
     def generate_qa(self, doc: str, project: str, chunk_size: int = 300):
         no_answer_str = 'NO ANSWER'
```
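The config defaults in this hunk follow a dict-then-environment fallback pattern: an explicit config value wins, otherwise an environment variable, otherwise a default. A dependency-free sketch of that pattern (the config dict here is a hypothetical stand-in, not the project's real configuration):

```python
import os

# Hypothetical stand-in for QUESTIONGENERATOR_CONFIG, which the real module
# loads from the project's configuration files.
QUESTIONGENERATOR_CONFIG = {'temperature': 0.2}

def resolve(key, env_var=None, default=None):
    """Prefer an explicit config value, then an environment variable, then the default."""
    value = QUESTIONGENERATOR_CONFIG.get(key)
    if value is not None:
        return value
    if env_var is not None:
        value = os.getenv(env_var)
        if value is not None:
            return value
    return default

temperature = resolve('temperature', default=0.0)   # found in config: 0.2
api_key = resolve('openai_api_key', env_var='OPENAI_API_KEY')
```

This keeps credentials out of the config file while still letting a config entry override the environment when one is present.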
4 changes: 3 additions & 1 deletion src_langchain/llm/README.md
```diff
@@ -6,7 +6,9 @@ LLM is the key component to ensure the functionality of chatbot. Besides providi
 
 The chatbot uses `ChatLLM` module to generate responses as the final answer, which will be returned to the user end. In order to adapt the LangChain agent, this must be a LangChain `BaseChatModel`.
 
-By default, it uses `ChatOpenAI` from LangChain, which calls the chat service of OpenAI.
+By default, it uses `ChatLiteLLM` from LangChain, which calls the OpenAI chat service and can also route to 50+ other LLMs, including Anthropic, Cohere, Google PaLM, Replicate, and Llama 2.
+See the [ChatLiteLLM usage guide](https://python.langchain.com/docs/integrations/chat/litellm) for details.
 
 Refer to [LangChain Models](https://python.langchain.com/en/latest/modules/models.html) for more LLM options.
 
 ### Configuration
```
4 changes: 2 additions & 2 deletions src_langchain/llm/openai_chat.py
```diff
@@ -2,7 +2,7 @@
 import sys
 from typing import Optional
 
-from langchain.chat_models import ChatOpenAI
+from langchain.chat_models import ChatOpenAI, ChatLiteLLM
 
 sys.path.append(os.path.join(os.path.dirname(__file__), '../..'))
```

```diff
@@ -12,7 +12,7 @@
 llm_kwargs = CHAT_CONFIG.get('llm_kwargs', {})
 
 
-class ChatLLM(ChatOpenAI):
+class ChatLLM(ChatLiteLLM):
     '''Chat with LLM given context. Must be a LangChain BaseLanguageModel to adapt agent.'''
     model_name: str = CHAT_CONFIG.get('openai_model', 'gpt-3.5-turbo')
     openai_api_key: Optional[str] = CHAT_CONFIG.get('openai_api_key', None)
```
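The `ChatLLM` class in this hunk customizes its base purely through class-attribute defaults, so only the parent class needs to change to swap providers. A minimal, dependency-free sketch of that pattern, with a hypothetical stand-in base class instead of the real `ChatLiteLLM`:

```python
from typing import Optional

# Hypothetical stand-in for CHAT_CONFIG, which openai_chat.py reads from
# the project's configuration module.
CHAT_CONFIG = {'openai_model': 'gpt-3.5-turbo'}

class BaseChat:
    # Stand-in for the ChatLiteLLM base: attributes carry library defaults.
    model_name: str = 'gpt-4'
    openai_api_key: Optional[str] = None

class ChatLLM(BaseChat):
    # The subclass pins project-wide defaults by overriding class attributes,
    # mirroring how ChatLLM overrides its base class's defaults in the diff.
    model_name: str = CHAT_CONFIG.get('openai_model', 'gpt-3.5-turbo')
    openai_api_key: Optional[str] = CHAT_CONFIG.get('openai_api_key', None)

print(ChatLLM.model_name)
```

Because the overrides live on the class rather than in `__init__`, every instance picks up the configured model unless a caller explicitly passes a different one.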