First of all, I would like to express my sincere gratitude to all the authors for their outstanding work and for open-sourcing such an excellent dataset. 👍 👍 👍
I am currently attempting to conduct some tests with RAG (Retrieval-Augmented Generation) on this dataset, and I have some confusion regarding the retrieval source that I hope the authors can help clarify.
In RAG, it is necessary to first chunk and index the Wikipedia pages that may be used for retrieval.
Section 3.1 of the paper mentions that the questions involve 4,121 Wikipedia articles in total. Is this the complete retrieval source?
Or should I use the author-provided wikicache.tar.gz file (~9.43 GB)? (Of course, indexing that would consume a massive number of embedding tokens and take a significant amount of time.)
My understanding is that this cached Wikipedia was pre-filtered by running BM25 with the questions over all Wikipedia page titles, but I am not sure whether this is correct.
Additionally, I would like to ask where I can directly download the actual 4,121 Wikipedia articles that are used.
We recommend using the pip library provided by this repo to download Wikipedia articles as needed. The library ensures that the text downloaded is the revision as of November 2023. The wikicache.tar.gz file can be used to prepopulate the cache for this library, but it is optional: it simply contains all of the files downloaded onto my machine while we were running experiments, and it is not filtered in any way.
We don't recommend embedding all of Wikipedia - that would be prohibitively expensive for all but large organizations! Instead, your model should use a search tool to find relevant article titles, then retrieve from the text of individual articles returned by the search tool (i.e., index individual pages instead of the entire knowledge base).
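To make that shape concrete, here is a minimal sketch of the search-then-retrieve pattern: use a search tool to get candidate article titles, fetch the text of only those articles, chunk them, and rank the chunks against the question. The specifics are assumptions rather than part of this repo: it uses the public MediaWiki search API as the "search tool" and the `rank_bm25` package for chunk scoring. In practice you would want to fetch article text through the repo's pip library instead, so that you get the pinned November 2023 revisions.

```python
# Minimal sketch of "search for titles, then index individual pages" (assumptions noted above).
import requests
from rank_bm25 import BM25Okapi

WIKI_API = "https://en.wikipedia.org/w/api.php"

def search_titles(question: str, limit: int = 5) -> list[str]:
    """Use Wikipedia's search endpoint as the 'search tool' to get candidate article titles."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": question,
        "srlimit": limit,
        "format": "json",
    }
    resp = requests.get(WIKI_API, params=params, timeout=30)
    resp.raise_for_status()
    return [hit["title"] for hit in resp.json()["query"]["search"]]

def fetch_plaintext(title: str) -> str:
    """Fetch a plain-text extract of a single article.
    NOTE: this returns the *current* revision; the repo's pip library should be
    preferred here because it pins the November 2023 revision."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "titles": title,
        "format": "json",
    }
    resp = requests.get(WIKI_API, params=params, timeout=30)
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

def retrieve_chunks(question: str, k: int = 5, chunk_chars: int = 1000) -> list[str]:
    """Chunk and rank only the articles returned by the search tool,
    instead of indexing the entire knowledge base."""
    chunks: list[str] = []
    for title in search_titles(question):
        text = fetch_plaintext(title)
        chunks += [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    if not chunks:
        return []
    bm25 = BM25Okapi([c.lower().split() for c in chunks])
    return bm25.get_top_n(question.lower().split(), chunks, n=k)

if __name__ == "__main__":
    for chunk in retrieve_chunks("What is retrieval-augmented generation?", k=3):
        print(chunk[:200], "...\n")
```

The same structure works with an embedding-based ranker in place of BM25; the key point is that only the handful of retrieved articles ever get chunked and indexed, which keeps the embedding cost per question small.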