Opening the Assistant to accept custom LLM #810
base: main
Conversation
qarol commented on Oct 3, 2024
- Removed references to the private method in the Assistant’s specs
- Use the LLM adapter in the Assistant instance whenever possible, instead of calling the LLM directly
@qarol Thank you for your PR. You'll need to upmerge it with the latest main. Would you please explain what the benefits of this are?
Hi @andreibondarev, in the next step I was thinking of removing all the `if`s from the Assistant that determine the type of LLM object. Instead, we could move that logic into dedicated adapters, each knowing how to handle provider-specific details (e.g. what to do if the instruction changes). I think it would be great to make the Assistant compatible with any LLM, not just the ones available in this gem.
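A rough Ruby sketch of that adapter idea (class and method names here are illustrative assumptions, not the gem's actual API): each adapter encapsulates provider-specific behaviour, such as whether instructions go into a system-role message, so the Assistant no longer needs `if`/`case` branches on the LLM class.

```ruby
module Adapters
  # Base adapter: defines the questions the Assistant may ask about a provider.
  class Base
    # Should instructions be passed as a message with the "system" role?
    def support_system_message?
      raise NotImplementedError
    end
  end

  # Example provider adapter: Ollama expects instructions as a system message.
  class Ollama < Base
    def support_system_message?
      true
    end
  end
end
```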
(force-pushed from 7c48346 to fc75555)
Hey @andreibondarev, I've changed the approach a little by allowing a custom LLM adapter to be injected when the Assistant is initialized. That brings more flexibility to use it with a totally custom LLM that isn't yet covered by this gem. I think it can bring some value, and it also doesn't introduce any breaking changes. Let me know what you think :)
@qarol Do you have a use-case where you're trying to hook an LLM that we don't yet support into the Assistant?
Hey @andreibondarev, I’m required to use a closed wrapper built on top of multiple LLMs. It includes some policies to prevent data leaks, etc. The abstraction that the Assistant provides is something I’d like to use without adding any monkey patches to the code. I hope this clarifies the goal a bit.
(force-pushed from 11c9444 to ec4f021)
@andreibondarev I've rebased with the main branch, which simplified my PR. WDYT about adding a custom LLM adapter through the constructor?
Sorry for the delay! I'm going to take a careful look at this tomorrow!
So in order to make it work you'd need to pass both a custom `llm` and a custom `llm_adapter`?
lib/langchain/assistant.rb
```diff
@@ -37,6 +37,7 @@ class Assistant
   # @param add_message_callback [Proc] A callback function (Proc or lambda) that is called when any message is added to the conversation
   def initialize(
     llm:,
+    llm_adapter: LLM::Adapter.build(llm),
```
Does this even work? Isn't the `llm` == `nil` here?
You can do it for keyword arguments because those arguments are evaluated from left to right. So `llm` will not be equal to `nil` in that case (unless you explicitly provide `nil`) :)
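A minimal Ruby sketch (with made-up names) of the point above: keyword-argument defaults are evaluated left to right, so a later default such as `llm_adapter:` can read the earlier `llm:` argument, which is only `nil` if the caller passes `nil` explicitly.

```ruby
# Later keyword defaults can reference earlier keyword arguments,
# because defaults are evaluated in order when the method is called.
def build_assistant(llm:, llm_adapter: "adapter for #{llm}")
  [llm, llm_adapter]
end

build_assistant(llm: "openai")  # => ["openai", "adapter for openai"]
build_assistant(llm: nil)       # => [nil, "adapter for "]  (llm is nil only if passed explicitly)
```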
Yes, and I agree that's not fully elegant yet.

```ruby
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
adapter = Langchain::Assistant::LLM::Adapters::OpenAI.new(llm: llm)
assistant = Langchain::Assistant.new(llm: adapter)
```

You would have to provide a wrapped llm instead of a pure llm to the assistant. A different approach might be introducing a dedicated factory for registered LLMs:

```ruby
module Langchain
  class Assistant
    def self.for(llm:)
      new(llm: LLM::Adapter.build(llm: llm))
    end

    def initialize(llm:)
      raise ArgumentError unless llm.is_a?(LLM::Adapters::Base)
      # do something
    end
  end
end

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
assistant = Langchain::Assistant.for(llm: llm)
# or
llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])
assistant = Langchain::Assistant.new(llm: llm)
```

Or it can be simplified to something like this (forgive me the pseudocode):

```ruby
module Langchain
  class Assistant
    def initialize(llm:)
      case llm
      when Langchain::LLM::Base
        @llm = Langchain::Assistant::LLM::Adapter.build(llm)
      when Langchain::Assistant::LLM::Adapters::Base
        @llm = llm
      else
        raise "Invalid LLM"
      end
    end

    # Use only @llm everywhere
  end
end

llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])
assistant = Langchain::Assistant.new(llm: llm)
```

I think the last option would be the simplest 🤔
@andreibondarev I've implemented the latest concept. WDYT?
@qarol Hmmm... these are pretty big changes. How much flexibility do you have to override and monkey-patch classes in your own application? Could you overwrite this class and the build method to whatever you want? https://github.com/patterns-ai-core/langchainrb/blob/main/lib/langchain/assistant/llm/adapter.rb. Also -- I think what's going to happen at some point is that these types of classes (e.g.: …)
@andreibondarev thanks for your feedback. Opening up the adapter factory is an option too. I have something like that:

```ruby
module Langchain
  class Assistant
    module LLM
      class Adapter
        class << self
          prepend(Module.new {
            def build(llm)
              if llm.is_a?(CustomLLM)
                CustomAdapter.new
              else
                super
              end
            end
          })
        end
      end
    end
  end
end
```

That builder could also become an AdapterRegistry or something.
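For completeness, a hypothetical sketch of what such an AdapterRegistry could look like (none of these names exist in the gem; this only illustrates the idea): adapters register themselves per LLM class and `build` looks them up.

```ruby
module Langchain
  class Assistant
    module LLM
      class AdapterRegistry
        @adapters = {}

        class << self
          # Register an adapter class for a given LLM class.
          def register(llm_class, adapter_class)
            @adapters[llm_class] = adapter_class
          end

          # Look up and instantiate the adapter for the given LLM instance.
          def build(llm)
            adapter_class = @adapters.fetch(llm.class) do
              raise ArgumentError, "No adapter registered for #{llm.class}"
            end
            adapter_class.new
          end
        end
      end
    end
  end
end

# Usage (illustrative names):
# Langchain::Assistant::LLM::AdapterRegistry.register(CustomLLM, CustomAdapter)
# adapter = Langchain::Assistant::LLM::AdapterRegistry.build(custom_llm)
```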
The Ollama LLM creates a message with the system role for the instructions provided to the Assistant.
Keep the logic that determines whether the instructions should be placed as a system message in one place only.
Added an `llm_adapter` parameter to the `Assistant` class constructor. This allows the Assistant to be initialized with custom language models and adapters, and provides more flexibility in choosing and managing LLM integrations.
Don't call the LLM directly; use the wrapper in the Assistant instead.
That makes the Assistant open to fully custom LLMs.
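A minimal usage sketch of the constructor change described in these notes (CustomLLM and CustomAdapter are placeholder names for a user-provided LLM wrapper and its adapter, not classes shipped with the gem):

```ruby
# Sketch only: pass a custom LLM together with a custom adapter via the
# llm_adapter keyword added in this PR; by default the adapter is built
# from the llm argument.
llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])

assistant = Langchain::Assistant.new(
  llm: llm,
  llm_adapter: CustomAdapter.new(llm: llm)
)
```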