Removed the temperature parameter #657

Closed · wants to merge 1 commit

Conversation

@Kirushikesh commented Feb 24, 2024

User description

Addressing issue #656.

This PR removes all occurrences of the temperature keyword from the LangChain LLM wrapper.


Changes walkthrough

**Enhancement**
- `base.py` (`src/ragas/llms/base.py`): Remove temperature parameter from BaseRagasLLM (+0/-17)
  - Removed the `get_temperature` method.
  - Removed the `temperature` parameter from various methods.
  - Simplified the `generate` and `agenerate_text` methods by removing temperature-handling logic.

**Tests**
- `conftest.py` (`tests/conftest.py`): Update test mocks to reflect temperature parameter removal (+3/-3)
  - Updated mock `generate_text` and `agenerate_text` methods to remove the `temperature` parameter.
- `test_llm.py` (`tests/unit/llms/test_llm.py`): Update unit tests for temperature parameter removal (+3/-3)
  - Updated mock `generate_text` and `agenerate_text` methods to remove the `temperature` parameter.
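For context, a minimal sketch of what the simplified interface might look like after this change; the names mirror ragas' `BaseRagasLLM`, but the exact signature here is an illustrative assumption, not a copy of the diff:

```python
from abc import ABC, abstractmethod
import typing as t

from langchain_core.callbacks import Callbacks
from langchain_core.outputs import LLMResult
from langchain_core.prompt_values import PromptValue


class BaseRagasLLM(ABC):
    """Sketch of the LLM interface with the temperature parameter removed."""

    @abstractmethod
    def generate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        stop: t.Optional[t.List[str]] = None,
        callbacks: Callbacks = None,
    ) -> LLMResult:
        # The wrapped model now uses whatever temperature it was
        # initialized with; no per-call override is attempted.
        ...
```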

codeant-ai bot added the enhancement label on Feb 24, 2024

    codeant-ai bot commented Feb 24, 2024

    PR Description updated to latest commit (3653c5b)

@shahules786 (Member)

Hey @Kirushikesh, thanks a lot for the PR, bro. As a review note: temperature does play a role in some of the ragas metrics, like answer-relevancy; that's why we pass it. But given that some models don't allow this argument to be passed, I would prefer to ignore it only for those particular models rather than for models that do support it (for example, OpenAI).
What do you think?
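As an illustration of the per-model approach suggested above, a small sketch that forwards temperature only when the wrapped model exposes it; the helper name and the `hasattr` heuristic are assumptions, not ragas code:

```python
import typing as t

from langchain_core.language_models import BaseLanguageModel


def generation_kwargs(
    llm: BaseLanguageModel, temperature: float
) -> t.Dict[str, t.Any]:
    """Hypothetical helper: pass temperature only to models that support it."""
    # OpenAI-style models expose a `temperature` attribute; some
    # HuggingFace wrappers do not accept it per call, so skip it there.
    if hasattr(llm, "temperature"):
        return {"temperature": temperature}
    return {}
```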

@Kirushikesh (Author) commented Feb 27, 2024

Hello @shahules786, as far as I have gone through this repo, I see that only answer-relevancy uses the n factor to generate multiple questions, where n determines the temperature; for the rest of the metrics in ragas, n is kept at 1 or defaults to 1. Based on my research there is no straightforward solution to this problem, because it is not simple to change the temperature for all LangChain LLMs (like llm.temperature = x). I also understand that for answer-relevancy a higher temperature produces more readable questions for evaluation, but as I mentioned in the issue, a HuggingFace LLM does not pick up the temperature you pass to .generate(), which is a big downside: you can't use this metric with HF LLMs.

Another thing: I was wondering how the well-known LangChain and LlamaIndex teams handle changing an LLM's temperature at runtime. Apparently, as far as I understand, they don't; once you initialize an LLM with a temperature, you don't change it during inference (tell me if I am wrong). So the possible workarounds I see for this problem are:

1. Since only the answer-relevancy metric uses this temperature, we could change the prompt to generate all n questions at once (see the sketch below).
2. Ask the user to supply a separate LLM with a higher temperature for answer-relevancy and use it for question generation.
3. Use the same LLM to generate n questions without changing the temperature, assuming the user initialized the LLM with a non-zero temperature (I recommend this because the other libraries do the same).

Let me know your thoughts. But I guess this is something that needs to be addressed: Ragas is gaining popularity, and even users without OpenAI credentials should be able to get the most out of these metrics.
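A quick sketch of option 1, folding the n sampled questions into a single completion so no per-call temperature change is needed; the prompt wording, plain-text `llm.invoke` usage, and line-based parsing are illustrative assumptions:

```python
from langchain_core.prompts import PromptTemplate

# Hypothetical single-shot prompt replacing n separate high-temperature calls.
question_gen = PromptTemplate.from_template(
    "Generate {n} diverse questions that the following answer could be "
    "responding to. Return one question per line.\n\nAnswer: {answer}\n"
)


def generate_questions(llm, answer: str, n: int = 3) -> list:
    """Ask for all n candidate questions in one generation."""
    text = llm.invoke(question_gen.format(n=n, answer=answer))
    # Split the single completion back into up to n candidate questions.
    return [line.strip() for line in text.splitlines() if line.strip()][:n]
```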

@shahules786 (Member)

Hey @Kirushikesh, thanks for the reply and the PR. Will consider this while I also explore and think more about the influence of temperature. Will keep you posted.

@saadbouhya

@Kirushikesh did you find a workaround? Otherwise I can't run the evaluation using Gemini's model.

@Kirushikesh (Author)

@saadbouhya no, I just made this PR and then moved to another evaluation library, unfortunately ;(

@jjmachan (Member) commented Jun 4, 2024

hey @Kirushikesh, that is very sad to hear 🙁
My apologies for not merging the PR sooner; the issue was that this is not a long-term solution, but I'm guessing it would have unblocked you.

    will get this issue solved soon

also, just curious: which tool did you end up using?

@Kirushikesh (Author) commented Jun 5, 2024

Hello @jjmachan, honestly I was not very keen on getting my PR merged; I was open to other solutions too. But since no mitigation has been added for this problem yet, I ended up using SelfCheckGPT and LangChain's built-in evaluators.

@saadbouhya

@Kirushikesh You can actually create a subclass of BaseRagasLLM: just copy the LangchainLLMWrapper used in ragas and remove temperature; that should work.
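A sketch of that workaround: subclass the wrapper and swallow the temperature argument before calling the underlying LangChain model. Method and attribute names approximate the ragas 0.1 wrapper and should be checked against the installed version:

```python
from langchain_core.outputs import LLMResult

from ragas.llms import LangchainLLMWrapper


class NoTemperatureLLM(LangchainLLMWrapper):
    """Drops the per-call temperature so models like Gemini don't reject it."""

    def generate_text(
        self, prompt, n=1, temperature=None, stop=None, callbacks=None
    ) -> LLMResult:
        # Ignore `temperature`; the model keeps whatever value it was
        # initialized with. The signature here only approximates the wrapper.
        return self.langchain_llm.generate_prompt(
            prompts=[prompt] * n, stop=stop, callbacks=callbacks
        )
```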

@jjmachan (Member) commented Jun 6, 2024

@saadbouhya thanks for suggesting that; you are right, that would have been the easiest way to unblock! ❤️

btw @Kirushikesh and @saadbouhya, we are holding a roadmap discussion call soon. I would love to invite you both, since we would love the feedback. You can see the rough draft in #1009.

If you're interested, do share your emails or Discord IDs and I'll send over the invite.

@Kirushikesh (Author)

@jjmachan thanks for the invite; I would love to work with you and contribute my expertise to the repository. My email: [email protected]

@jjmachan (Member) commented Jun 6, 2024

Just sent you one; do vote for a time here too: https://rallly.co/invite/WOSpLw4oo9c9 🙂

@jjmachan (Member) commented Nov 3, 2024

Closing this since it is fixed in v0.2; feel free to reopen if there is still something more 🙂

jjmachan closed this on Nov 3, 2024