[R-273] 'temperature' parameter in LangchainLLMWrapper.generate_text causing issues #656
Comments
Further, I raised a PR to address the issue: #657
+1, getting the same error when trying out Google Gemini models through langchain-google-genai. @Kirushikesh, but removing the temperature arg impacts OpenAI behavior, right?
@joy13975, when initialising the OpenAI LLM we already provide the temperature there.
Does anyone have an update on this bug?
Hey @RazHadas, there are two PRs raised for this same issue. You can check them out or wait until we merge them.
This issue should probably not be closed without merging the fixes. I am facing the same issue using langchain-google-genai.
Thanks for bringing it to our attention @LostInCode404, reopening this.
Was this issue fixed? I am also getting the same error when I use langchain-google-genai, but it works fine with langchain_openai. Please help!
Describe the bug
LangchainLLMWrapper has a .generate_text() method that in turn calls .generate_prompt() on the underlying LLM. LangchainLLMWrapper passes a 'temperature' parameter to .generate_prompt(), which causes the following issues:
Since a temperature can already be passed as a parameter when initialising a LangChain LLM, there is no need to supply it again in LangchainLLMWrapper.
For example, in HuggingFacePipeline you can specify the temperature at initialization using:
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, temperature=1)
Or when using IBM LLM you can specify the temperature by:
Ragas version: 0.1.1
Python version: 3.10.6
Code to Reproduce
The following code shows why the 'temperature' parameter does not affect the response in a HuggingFace LLM:
In the above code I initialised HuggingFacePipeline with the gpt-2 model, wrapped it in ragas' LangchainLLMWrapper, and passed 'temperature=0' when calling .generate_text(). Ideally this should raise an error, because a temperature of 0 is not accepted by HuggingFace.
You can also check by passing a temperature of 99 to .generate_text(); no exception is raised for this absurdly high value either. Thus it is evident that the temperature passed to .generate_text() does not affect the HuggingFace LLM at all. Also, the user can set the temperature in the pipeline() call itself, so there is no need for an additional temperature parameter in .generate_text().
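Since the original reproduction snippet was not preserved here, the silent-ignore behaviour can be illustrated with a minimal, self-contained sketch. The class names below (FakeHuggingFaceLLM, FakeLangchainLLMWrapper) are stand-ins, not the real LangChain or ragas code; the point is that an LLM whose generate_prompt() accepts **kwargs simply drops keys it does not use, so even an invalid temperature never raises:

```python
class FakeHuggingFaceLLM:
    """Stand-in for a LangChain LLM that swallows unknown kwargs."""

    def __init__(self, temperature=1.0):
        # The temperature that actually drives generation is fixed at init time.
        self.temperature = temperature

    def generate_prompt(self, prompts, **kwargs):
        # A 'temperature' arriving via kwargs is silently ignored here.
        return [f"generated with temperature={self.temperature}" for _ in prompts]


class FakeLangchainLLMWrapper:
    """Stand-in for ragas' LangchainLLMWrapper, which forwards a temperature."""

    def __init__(self, langchain_llm):
        self.langchain_llm = langchain_llm

    def generate_text(self, prompt, temperature=1e-8):
        # The forwarded temperature reaches generate_prompt() but has no effect.
        return self.langchain_llm.generate_prompt([prompt], temperature=temperature)[0]


wrapper = FakeLangchainLLMWrapper(FakeHuggingFaceLLM(temperature=1.0))
# Neither 0 nor 99 raises, and neither changes the output:
print(wrapper.generate_text("hello", temperature=0))   # temperature=1.0 is used
print(wrapper.generate_text("hello", temperature=99))  # temperature=1.0 is used
```

This mirrors why the gpt-2 reproduction above never errors: the init-time temperature is the only one that matters.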
The following code explains why passing 'temperature' raises an error with an IBM LLM:
As the error trace shows, the LangChain-wrapped IBM LLM does not support 'temperature' as an additional parameter to .generate_prompt(). The error goes away when the temperature parameter is not passed. The same error occurs when calling ragas' evaluate() with the same IBM LLM.
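The error trace itself was not captured here, but the failure mode can be sketched with a self-contained mock (FakeIBMLLM is a stand-in, not the real langchain IBM integration): when generate_prompt() declares a fixed signature with no **kwargs, the extra 'temperature' forwarded by the wrapper raises a TypeError.

```python
class FakeIBMLLM:
    """Stand-in for a LangChain LLM whose generate_prompt accepts no extra kwargs."""

    def generate_prompt(self, prompts, stop=None):
        return ["ok" for _ in prompts]


llm = FakeIBMLLM()
try:
    # This mirrors what LangchainLLMWrapper.generate_text does internally.
    llm.generate_prompt(["hello"], temperature=1e-8)
except TypeError as e:
    # The unexpected-keyword-argument TypeError described in the error trace above.
    print(e)
```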
Expected behavior
A clear solution to this problem is to remove the temperature parameter from LangchainLLMWrapper.