Hi,
Thank you for putting together this amazing instruction. It is easy to follow and works smoothly. I had no problem getting the output in run_QL.ipynb. However, I find the output hard to understand:
1. I gather that the middle column is some sort of ranking score.
2. The first column looks like ClueWeb09 document IDs, which seem to be the only information available for computing ranking performance. How do I get the relevance labels for these documents for query 1-1-1?
Could you please explain the output dataframe, especially what the columns mean and how to compute the ranking metrics?
Thank you very much!
The qrels can be constructed easily from the qrels published by the TREC Web Track. For each query you have the topic and facet IDs, which you can use to look up the relevance judgments in the TREC Web Track qrels.
With the qrels in hand, you can compute the metrics using trec_eval or pytrec_eval.
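For concreteness, here is a minimal sketch of that workflow with pytrec_eval. It assumes the TREC Web Track diversity qrels format (`topic subtopic docno judgment`) and that the run is keyed the same way as the qrels; the file name, the sample document ID, and the mapping from query IDs like 1-1-1 to topic/facet are assumptions for illustration, not something defined by this repo:

```python
# Sketch under stated assumptions: qrels file in the TREC Web Track
# diversity format "topic subtopic docno judgment"; run query IDs match
# the "topic-facet" keys built below. File name, sample doc ID, and the
# ID scheme are hypothetical placeholders.
import pytrec_eval

# 1. Build the qrel dict: {query_id: {doc_id: relevance}}.
qrel = {}
with open('qrels.diversity.txt') as f:  # hypothetical file name
    for line in f:
        topic, facet, docno, judgment = line.split()
        qid = f'{topic}-{facet}'        # e.g. '1-1' for topic 1, facet 1
        qrel.setdefault(qid, {})[docno] = int(judgment)

# 2. Build the run dict from the output dataframe: {query_id: {doc_id: score}}.
#    The first column gives the doc IDs, the middle column the scores.
run = {
    '1-1': {'clueweb09-en0000-00-00000': 12.3},  # placeholder entry
}

# 3. Evaluate. Standard trec_eval measure names work, e.g. 'map', 'ndcg'.
evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'map', 'ndcg'})
for qid, measures in evaluator.evaluate(run).items():
    print(qid, measures)
```

Note that the query IDs in your run (e.g. 1-1-1) must match the keys of the qrel dict, so depending on how the notebook names its queries you may need to re-key one side before evaluating.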