ModuleNotFoundError: No module named 'ragas.langchain' #571
Comments
Hey @xinyang-handalindah, ragas 0.1 does not yet have this feature. We are working on it; for now you have two options
Also mentioned in #567.
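To make the gap concrete, here is a minimal sketch of the two import paths (the failing import is the one from this issue's title; nothing else is assumed):

```python
# ragas 0.0.x: the LangChain integration lived here.
# On ragas 0.1.x this raises ModuleNotFoundError: No module named 'ragas.langchain'
# from ragas.langchain.evalchain import RagasEvaluatorChain

# ragas 0.1.x: evaluation is done directly through ragas.evaluate
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy
```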
Do we have a pathway to migrate to the new API on
Currently, here's the guidance from the blog post:

```python
from langchain.smith import RunEvalConfig, run_on_dataset

evaluation_config = RunEvalConfig(
    custom_evaluators=list(eval_chains.values()),
    prediction_key="result",
)

result = run_on_dataset(
    client,
    dataset_name,
    create_qa_chain,
    evaluation=evaluation_config,
    input_mapper=lambda x: x,
)
```

Now it's completely detached, as shown here:

```python
from datasets import load_dataset
from ragas.metrics import context_precision, answer_relevancy, faithfulness
from ragas import evaluate

fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")

result = evaluate(
    fiqa_eval["baseline"].select(range(3)),
    metrics=[context_precision, faithfulness, answer_relevancy],
)
```
We are actually tracking this here: #567. In the meantime you could
@jjmachan, I was facing the same problem. I even tried ragas v0.0.22 but am getting a different error.
Hi @xinyang-handalindah and @Kirushikesh, I fixed it by installing If you use
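For anyone who needs the old LangChain integration right now, pinning the last 0.0.x release is the downgrade path mentioned in this thread. A minimal sketch, assuming the 0.0.22 API where metrics are wrapped by RagasEvaluatorChain:

```python
# Pin the last release that still ships the ragas.langchain module:
#   pip install "ragas==0.0.22"
from ragas.langchain.evalchain import RagasEvaluatorChain
from ragas.metrics import faithfulness

# Wrap a ragas metric as a LangChain-compatible evaluator chain
faithfulness_chain = RagasEvaluatorChain(metric=faithfulness)
```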
@18abhi89, @Kirushikesh, @dmpiergiacomo - hey 🙂 I would recommend using v0.1 for the latest metrics, but I know that without langchain support you're stuck there. For the time being this notebook has the workaround code; basically it is:

```python
# run the langchain RAG chain over the test questions
answers = []
contexts = []

for question in test_questions:
    response = retrieval_augmented_qa_chain.invoke({"question": question})
    answers.append(response["response"].content)
    contexts.append([context.page_content for context in response["context"]])

# make an HF dataset from the collected outputs
from datasets import Dataset

response_dataset = Dataset.from_dict({
    "question": test_questions,
    "answer": answers,
    "contexts": contexts,
    "ground_truth": test_groundtruths,
})

# now you can run the evaluations
```

Can you try it out and see if it solves the problem? I'm really sorry about the delay, guys - will get this sorted as fast as we can.
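The evaluation step that follows would then look roughly like this (a sketch reusing the metrics already shown earlier in the thread; pick whichever metrics you need):

```python
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

# score the HF dataset built from the LangChain RAG outputs above
results = evaluate(
    response_dataset,
    metrics=[faithfulness, answer_relevancy, context_precision],
)
print(results)
```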
@Kirushikesh I was getting the same error.
@dmpiergiacomo, which LLM are you using and what is the error?
@jjmachan I tried to execute your suggestion and I get the following error. Because of the recursion, this cost me ~$490 in API calls before the error message appeared. It's using GPT-4, not Turbo - just to warn others tempted to try this.
I'm getting the same error on Ragas 0.1.7. Is it still necessary to downgrade to 0.0.22 for this to work?
Getting the same error on Ragas 0.1.7.
You are a saint!
with
where
With ragas==0.1.14, I made it work by
The context_relevancy metric should be changed to context_precision, because I do not see any context_relevancy module in the ragas package anymore, and context_precision seems to be the same idea as context_relevancy.
Hey @dqminhv, thanks for pitching in, my friend. You're right regarding context_relevancy: it was deprecated since 0.1 in favor of context_precision and was removed recently. If you are looking for a reference-free metric to evaluate retrieval accuracy, check out context_utilization.
@jjmachan, is this issue resolved?
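For completeness, using the reference-free metric would look roughly like this (a sketch assuming your version exposes it as ragas.metrics.context_utilization and a dataset built as in the workaround above):

```python
from ragas import evaluate
from ragas.metrics import context_utilization

# context_utilization scores retrieval without needing a ground_truth column
results = evaluate(response_dataset, metrics=[context_utilization])
```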
Description of the bug
Using Google Colab. After running !pip install ragas, I am unable to import RagasEvaluatorChain from ragas.langchain.evalchain. It was okay last week (v0.0.22).
Ragas version: 0.1.0
Python version: 3.10.12
Code to Reproduce
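A minimal reproduction based on the description above (assuming a fresh Colab runtime where pip resolves ragas to 0.1.0):

```python
# !pip install ragas   # resolves to ragas 0.1.0
from ragas.langchain.evalchain import RagasEvaluatorChain
# ModuleNotFoundError: No module named 'ragas.langchain'
```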
Error trace
Expected behavior
Last week I was still able to import RagasEvaluatorChain from ragas.langchain.evalchain, but I encounter this error today.