
Where is CRAG inference in eval.py? #7

Open
zarekxu opened this issue Mar 13, 2024 · 4 comments

Comments


zarekxu commented Mar 13, 2024

Hi authors,

Thanks for the great work. I am a little confused about eval.py. I understand the data is fetched from eval_data/popqa_longtail_w_gs.jsonl, but I could not find the inference code that generates the results before the metric calculation. Is some code missing, or am I misunderstanding something?

Thanks.

@big-camel

#6


zarekxu commented Mar 14, 2024

Thanks for the reply, but that is not what I am looking for. I already have the JSON file; I want to feed it into CRAG and get the results, similar to Self-RAG's `pred, results, do_retrieve = generate(prompt, evidences, max_new_tokens=args.max_new_tokens)`:

[screenshot of the Self-RAG generation call]

I couldn't find such a generation function in your eval.py file. Could you tell me where the results are generated?
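(For context, the call pattern referenced above can be sketched as follows. This is a minimal, hypothetical stand-in, not Self-RAG's actual implementation; the real `generate` wraps a language-model call.)

```python
# Hypothetical stand-in for Self-RAG's generate(): returns a prediction,
# per-evidence results, and a flag for whether retrieval was used.
def generate(prompt, evidences, max_new_tokens=50):
    # A real implementation would call the language model here; this stub
    # just truncates the prompt to max_new_tokens characters.
    pred = prompt[:max_new_tokens]
    results = {i: {"evidence": ev} for i, ev in enumerate(evidences)}
    do_retrieve = len(evidences) > 0
    return pred, results, do_retrieve

pred, results, do_retrieve = generate(
    "What is CRAG?", ["doc A", "doc B"], max_new_tokens=10
)
```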

@big-camel

You can use CRAG_Inference to generate a file containing the answer to each question.

You can then use the eval script to rate the responses.
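(The two-stage workflow described above — generate a predictions file, then score it — can be sketched as follows. The inference "model", file name, and exact-match metric here are all hypothetical placeholders, not the repo's actual code.)

```python
import json
import os
import tempfile

# Stage 1 (the role of CRAG_Inference): run the model over each question
# and save one JSON record per line. The toy "model" here just uppercases.
def run_inference(questions):
    return [{"question": q, "prediction": q.upper()} for q in questions]

def write_jsonl(path, records):
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

# Stage 2 (the role of the eval script): reload the file and score the
# predictions against gold answers with a simple exact-match metric.
def read_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

def exact_match(preds, golds):
    hits = sum(p["prediction"] == g for p, g in zip(preds, golds))
    return hits / len(golds)

path = os.path.join(tempfile.mkdtemp(), "predictions.jsonl")
write_jsonl(path, run_inference(["a", "b"]))
score = exact_match(read_jsonl(path), ["A", "X"])
```

Keeping the two stages decoupled through a JSONL file means inference (slow, GPU-bound) can be run once and the evaluation script re-run cheaply with different metrics.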


zarekxu commented Mar 15, 2024

Got it! That makes sense. Thank you!
