Removed the temperature parameter #657
Conversation
PR Description updated to latest commit (3653c5b)
Hey @Kirushikesh, thanks a lot for the PR, bro. As a review: temperature does play a role in some of the ragas metrics, like answer-relevancy, which is why we pass it. But given that some models don't allow this argument to be passed, I would prefer to ignore it only for those particular models, rather than dropping it even for models that support it (for example, OpenAI).
Hello @shahules786, as far as I have gone through this repo, I see that only answer-relevancy uses the n factor to generate multiple questions, where n determines the temperature; for the rest of the ragas metrics, n is kept at 1 (or defaults to 1). Based on my research there is no straightforward solution to this problem, because it is not simple to change the temperature of every LangChain LLM at runtime (something like llm.temperature=x). I also understood that for answer-relevancy a higher temperature results in more readable questions used for evaluation, but as I mentioned in the issue, a HuggingFace LLM does not capture the temperature you pass to .generate(), which becomes a huge downside of not being able to use this metric on HF LLMs. Another thing: I was wondering how the well-known LangChain and LlamaIndex teams handle this issue of changing the temperature of an LLM at runtime; apparently, as far as I understood, they do not — once you initialize an LLM with a temperature, you don't change it during inference (tell me if I am wrong). So the possible workaround I see for this problem is:
Let me know your thoughts. But I guess this is something that needs to be addressed; Ragas is getting popular now, and even users without OpenAI credentials should be able to get the most out of these metrics.
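(For reference, the n-to-temperature coupling discussed above can be sketched roughly as follows. This is a minimal illustration with assumed values, not the exact ragas source: answer-relevancy requests n > 1 completions so the generated questions vary, while all other metrics use n = 1 and an effectively greedy temperature.)

```python
# Minimal sketch of how ragas-style code might derive a sampling temperature
# from the requested completion count n (values assumed for illustration).
def get_temperature(n: int) -> float:
    """Map the number of requested completions to a sampling temperature."""
    # n > 1 => sample with some randomness so the n outputs differ;
    # n == 1 => near-zero temperature, i.e. effectively deterministic output.
    return 0.3 if n > 1 else 1e-8
```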
Hey @Kirushikesh, thanks for the reply and the PR. I will consider this, while I also want to explore and think more about the influence of temperature. Will keep you posted.
@Kirushikesh did you find a workaround? Otherwise I can't run the evaluation using Gemini's models.
@saadbouhya no, I made this PR and unfortunately moved to another evaluation library ;(
Hey @Kirushikesh, that is very sad to hear 🙁 We will get this issue solved soon. Also, just curious: which tool did you end up using?
Hello @jjmachan, honestly I was not very keen on getting my PR merged; I was open to other solutions too. But since no mitigations have been added by you guys for this problem yet, I ended up using SelfCheckGPT and LangChain's built-in evaluators.
@Kirushikesh You can actually create a subclass of BaseRagasLLM: just copy the LangchainLLMWrapper used in ragas and remove temperature. That should work — see the sketch below.
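(A rough sketch of this workaround follows: subclass BaseRagasLLM, mirroring LangchainLLMWrapper, and never forward temperature to the underlying LangChain LLM. The class name is hypothetical, and the signatures are assumptions based on this thread; they may differ between ragas versions.)

```python
# Sketch only: wrap a LangChain LLM for ragas without ever passing temperature,
# so models that reject the argument (e.g. some HF or Gemini wrappers) still work.
import typing as t

from langchain_core.callbacks import Callbacks
from langchain_core.outputs import LLMResult
from langchain_core.prompt_values import PromptValue

from ragas.llms.base import BaseRagasLLM


class TemperatureFreeLLMWrapper(BaseRagasLLM):  # hypothetical name
    def __init__(self, langchain_llm):
        self.langchain_llm = langchain_llm

    def generate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        temperature: float = 1e-8,  # kept for interface compatibility, ignored
        stop: t.Optional[t.List[str]] = None,
        callbacks: Callbacks = None,
    ) -> LLMResult:
        # Request n completions but drop temperature before delegating.
        return self.langchain_llm.generate_prompt(
            prompts=[prompt] * n, stop=stop, callbacks=callbacks
        )

    async def agenerate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        temperature: float = 1e-8,  # ignored, see above
        stop: t.Optional[t.List[str]] = None,
        callbacks: Callbacks = None,
    ) -> LLMResult:
        return await self.langchain_llm.agenerate_prompt(
            prompts=[prompt] * n, stop=stop, callbacks=callbacks
        )
```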
@saadbouhya thanks for suggesting that; you are right, that would have been the easiest way to unblock! ❤️ By the way, @Kirushikesh and @saadbouhya, we are conducting a roadmap discussion call soon. I would love to invite you both if you are interested, since we would love the feedback. You can see the rough draft in #1009. If you're interested, do share your emails or Discord IDs and I'll send over the invite.
@jjmachan thanks for inviting me; I would love to work with you guys and contribute my expertise to the repository. My email: [email protected]
Just sent you one. Do vote for the time here too: https://rallly.co/invite/WOSpLw4oo9c9 🙂
Closing this since it is fixed in v0.2; feel free to reopen if there is still something more 🙂
User description
Addressing issue #656.
This PR removes all occurrences of the temperature keyword in the LangChain LLM wrapper.
Description
- Removed the `temperature` parameter from the `BaseRagasLLM` class methods to align with issue [R-273] 'temperature' parameter in LangchainLLMWrapper.generate_text causing issues #656.
- Removed the `get_temperature` method as it is no longer needed.
- Removed the `temperature` parameter from method signatures.

Changes walkthrough
base.py: Remove temperature parameter from BaseRagasLLM (src/ragas/llms/base.py)
- Removed the `get_temperature` method.
- Removed the `temperature` parameter from various methods.
- Simplified the `generate` and `agenerate_text` methods by removing temperature handling logic.

conftest.py: Update test mocks to reflect temperature parameter removal (tests/conftest.py)
- Updated the `generate_text` and `agenerate_text` methods to remove the `temperature` parameter.

test_llm.py: Update unit tests for temperature parameter removal (tests/unit/llms/test_llm.py)
- Updated the `generate_text` and `agenerate_text` methods to remove the `temperature` parameter.
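(For illustration, here is a sketch of what the simplified `BaseRagasLLM` interface might look like after this PR, with `temperature` and `get_temperature` gone. The exact signatures are assumptions based on the walkthrough above, not the merged code.)

```python
# Assumed post-PR shape of the interface: only n, stop and callbacks remain.
from abc import ABC, abstractmethod
import typing as t

from langchain_core.callbacks import Callbacks
from langchain_core.outputs import LLMResult
from langchain_core.prompt_values import PromptValue


class BaseRagasLLM(ABC):
    @abstractmethod
    def generate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        stop: t.Optional[t.List[str]] = None,
        callbacks: Callbacks = None,
    ) -> LLMResult:
        ...

    @abstractmethod
    async def agenerate_text(
        self,
        prompt: PromptValue,
        n: int = 1,
        stop: t.Optional[t.List[str]] = None,
        callbacks: Callbacks = None,
    ) -> LLMResult:
        ...
```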
💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.