
Opening the Assistant to accept custom LLM #810

Open
wants to merge 9 commits into main

Conversation

@qarol (Contributor) commented Oct 3, 2024

  • Removed references to the private method in the Assistant’s specs
  • Use the LLM adapter in the Assistant instance whenever possible, instead of calling LLM directly

@andreibondarev (Collaborator) commented:

@qarol Thank you for your PR. You'll need to upmerge it with main.

> Use the LLM adapter in the Assistant instance whenever possible, instead of calling LLM directly

Would you please explain what the benefits of this are?

@qarol (Contributor, Author) commented Oct 3, 2024

Hi @andreibondarev,
I think the Assistant needs a wrapped LLM object with a consistent interface that can handle all necessary actions. I'm not sure if the Assistant should know that some actions can be performed directly on the LLM object, while others require its adapter. I think we can rely solely on the adapter. IMO these changes could bring us closer to a setup where the Assistant depends on only one object, not two - potentially simplifying custom client injection.

In the next step, I was thinking of removing all the ifs from the Assistant that determine the type of the LLM object. Instead, we could move that logic into dedicated adapters, with each one knowing how to handle things (e.g., what to do when the instructions change).

I think it would be great to make the Assistant compatible with any LLM, not just the ones available in this gem.
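
For illustration only, a rough sketch of what such an adapter-owned hook could look like (the support_system_message? method and the classes below are assumptions for this example, not necessarily the gem's actual API):

# Illustrative only: each adapter decides how instructions are handled,
# so the Assistant no longer needs per-LLM conditionals.
module Langchain
  class Assistant
    module LLM
      module Adapters
        class Base
          def support_system_message?
            raise NotImplementedError
          end
        end

        class Ollama < Base
          # e.g. instructions are sent as a message with the "system" role
          def support_system_message?
            true
          end
        end
      end
    end
  end
end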

@qarol (Contributor, Author) commented Oct 11, 2024

Hey @andreibondarev, I've changed the approach a little by allowing a custom LLM adapter to be injected when the Assistant is initialized. That brings more flexibility to use it with a totally custom LLM that is not yet covered by this gem. I think it can bring some value, and it also doesn't introduce any breaking changes. Let me know what you think :)
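
A rough sketch of that usage, with a hypothetical CustomLLM client and CustomAdapter (the llm_adapter: keyword matches the constructor change discussed below):

# Hypothetical classes: CustomLLM is your own client, CustomAdapter implements
# the methods the Assistant expects from an adapter.
class CustomAdapter < Langchain::Assistant::LLM::Adapters::Base
  # ... implement the adapter interface the Assistant relies on
end

llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])
assistant = Langchain::Assistant.new(
  llm: llm,
  llm_adapter: CustomAdapter.new
)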

@qarol changed the title from "Preparation for opening the Assistant to accept custom LLM" to "Opening the Assistant to accept custom LLM" on Oct 11, 2024
@andreibondarev (Collaborator) commented:

@qarol Do you have a use-case where you're trying to hook an LLM that we don't yet support into the Assistant?

@qarol (Contributor, Author) commented Oct 12, 2024

Hey @andreibondarev, I’m required to use a closed wrapper built on top of multiple LLMs. It includes some policies to prevent data leaks, etc. The abstraction that the Assistant provides is something I’d like to use without adding any monkey patches to the code. I hope this clarifies the goal a bit.

@qarol force-pushed the main branch 5 times, most recently from 11c9444 to ec4f021 on October 12, 2024 at 22:00
@qarol (Contributor, Author) commented Oct 12, 2024

@andreibondarev I've rebased with the main branch, which simplified my PR. WDYT about adding a custom LLM adapter through the constructor?

@andreibondarev (Collaborator) commented:

> @andreibondarev I've rebased with the main branch, which simplified my PR. WDYT about adding a custom LLM adapter through the constructor?

Sorry for the delay! I'm going to take a careful look at this tomorrow!

@andreibondarev (Collaborator) commented:

> @andreibondarev I've rebased with the main branch, which simplified my PR. WDYT about adding a custom LLM adapter through the constructor?

So in order to make it work you'd need to pass both custom llm: and llm_adapter:, right? Anyways -- I'm all for it, I'd just like to figure out an elegant DSL for it.

@@ -37,6 +37,7 @@ class Assistant
# @param add_message_callback [Proc] A callback function (Proc or lambda) that is called when any message is added to the conversation
def initialize(
llm:,
llm_adapter: LLM::Adapter.build(llm),
@andreibondarev (Collaborator) commented on the diff:

Does this even work? Isn't the llm == nil here?

@qarol (Contributor, Author) replied:

You can do this with keyword arguments because their defaults are evaluated from left to right, so llm is already bound (and not nil) when the llm_adapter: default runs, unless you explicitly pass nil :)
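
A tiny standalone illustration of that evaluation order (the method below is hypothetical, not from the gem):

# A later keyword default can reference an earlier keyword argument,
# because defaults are evaluated left to right.
def build(llm:, llm_adapter: "adapter for #{llm}")
  [llm, llm_adapter]
end

build(llm: "openai")                    # => ["openai", "adapter for openai"]
build(llm: "openai", llm_adapter: "x")  # => ["openai", "x"]
build(llm: nil)                         # => [nil, "adapter for "]  (only an explicit nil makes llm nil)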

@qarol (Contributor, Author) commented Oct 24, 2024

> @andreibondarev I've rebased with the main branch, which simplified my PR. WDYT about adding a custom LLM adapter through the constructor?
>
> So in order to make it work you'd need to pass both custom llm: and llm_adapter:, right? Anyways -- I'm all for it, I'd just like to figure out an elegant DSL for it.

Yes, and I agree that's not fully elegant yet.
Ideally, I would like to pass just one object that exposes an interface with all the methods the Assistant requires to run. That's why I tried to wrap the llm with an adapter and use only that object within the Assistant, with no calls to the llm directly. But this approach would bring at least one breaking change that would be unpleasant to have:

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
adapter = Langchain::Assistant::LLM::Adapters::OpenAI.new(llm: llm)
assistant = Langchain::Assistant.new(llm: adapter)

You would have to provide a wrapped llm to the Assistant instead of the pure llm.

A different approach might be to introduce a dedicated factory for registered LLMs:

module Langchain
  class Assistant
    def self.for(llm:)
      new(llm: LLM::Adapter.build(llm))
    end

    def initialize(llm:)
      raise ArgumentError unless llm.is_a?(LLM::Adapters::Base)
      # do something
    end
  end
end

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
assistant = Langchain::Assistant.for(llm: llm)

# or, for an LLM not covered by the gem -- CustomLLM would itself have to be
# (or be wrapped in) an LLM::Adapters::Base subclass to pass the guard in #initialize:

llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])
assistant = Langchain::Assistant.new(llm: llm)

Or it can be simplified to something like this (forgive the pseudocode):

module Langchain
  class Assistant
    def initialize(llm:)
      case llm
      when Langchain::LLM::Base
        @llm = Langchain::Assistant::LLM::Adapter.build(llm)
      when Langchain::Assistant::LLM::Adapters::Base
        @llm = llm
      else
        raise "Invalid LLM"
      end
    end
    
    # Use only @llm everywhere
  end
end

# CustomLLM would need to be (or be wrapped in) a Langchain::Assistant::LLM::Adapters::Base
# subclass so it doesn't hit the "Invalid LLM" branch above
llm = CustomLLM.new(api_key: ENV["CUSTOM_LLM_KEY"])
assistant = Langchain::Assistant.new(llm: llm)

I think the last option would be the simplest 🤔

@qarol (Contributor, Author) commented Oct 25, 2024

@andreibondarev I've implemented the latest concept. WDYT?

@andreibondarev (Collaborator) commented Oct 29, 2024

> @andreibondarev I've implemented the latest concept. WDYT?

@qarol Hmmm... these are pretty big changes. How much flexibility do you have to override and monkey-patch classes in your own application?

Could you override this class and its build method to do whatever you want? https://github.com/patterns-ai-core/langchainrb/blob/main/lib/langchain/assistant/llm/adapter.rb

Also -- I think what's going to happen at some point is that these types of classes (e.g.: Langchain::LLM::OpenAI and Langchain::Assistant::LLM::Adapters::OpenAI) are going to be combined because there's overlap between them.

@qarol (Contributor, Author) commented Nov 5, 2024

@andreibondarev thanks for your feedback. Opening up the adapter factory is an option too. I have something like this:

# Prepend a module onto Adapter's singleton class so a CustomLLM instance resolves
# to CustomAdapter, while every other LLM falls through to the default build.
module Langchain
  class Assistant
    module LLM
      class Adapter
        class << self
          prepend(Module.new {
            def build(llm)
              if llm.is_a?(CustomLLM)
                CustomAdapter.new
              else
                super
              end
            end
          })
        end
      end
    end
  end
end

That builder could also become an AdapterRegistry or something similar.
If that's too big a change, let's simplify it a little ;)
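
For illustration, a minimal sketch of what such a registry could look like (AdapterRegistry and its methods are assumptions, not part of the gem):

module Langchain
  class Assistant
    module LLM
      # Hypothetical registry that maps LLM classes to adapter classes.
      class AdapterRegistry
        @adapters = {}

        class << self
          def register(llm_class, adapter_class)
            @adapters[llm_class] = adapter_class
          end

          # Mirrors Adapter.build(llm): pick the adapter registered for this LLM's class.
          def build(llm)
            entry = @adapters.find { |llm_class, _| llm.is_a?(llm_class) }
            raise ArgumentError, "No adapter registered for #{llm.class}" unless entry

            entry.last.new
          end
        end
      end
    end
  end
end

# Usage, reusing the hypothetical CustomLLM/CustomAdapter from above:
# Langchain::Assistant::LLM::AdapterRegistry.register(CustomLLM, CustomAdapter)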

  • The Ollama LLM creates a message with the system role for the instructions provided to the Assistant.
  • Keep the logic that determines whether the instructions should be placed as a system message in only one place.
  • Added an `llm_adapter` parameter to the `Assistant` class constructor. It allows initializing the Assistant with custom language models and adapters, and gives more flexibility in choosing and managing LLM integrations.