A program for translating the AIDA CoNLL-YAGO dataset to use Wikidata QIDs instead of Wikipedia titles for entity identifiers.
You can find the pregenerated dataset on Hugging Face (March 1, 2023).
If you want to regenerate the dataset with fresh Wikipedia/Wikidata mappings, you can build `aida-conll-yago-wikidata` from source by running the following command:
```
cargo build --release
```

`aida-conll-yago-wikidata` uses the mappings between Wikipedia titles and Wikidata QIDs generated by `wiki2qid`. Follow its instructions to generate the Apache Avro file containing the mappings first.
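Conceptually, the mapping lets each Wikipedia title be looked up and replaced by its Wikidata QID. A minimal sketch of that lookup, using a hypothetical in-memory dictionary rather than the actual Avro file produced by `wiki2qid`:

```python
# Hypothetical title -> QID entries for illustration only; the real
# mapping is generated by wiki2qid and stored in an Apache Avro file.
title_to_qid = {
    "Germany": "Q183",
    "European Union": "Q458",
}

def map_title(title):
    """Return the Wikidata QID for a Wikipedia title, or None if unmapped."""
    return title_to_qid.get(title)

print(map_title("Germany"))
```

Titles without a known QID come back as `None`, which is the case the conversion has to handle for entities that no longer resolve to a Wikidata item.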
For convenience, the original AIDA CoNLL-YAGO dataset is provided in `data/AIDA-YAGO2-dataset.tsv`.
Once you have the necessary mappings, you can generate the dataset with the following command:
```
cargo run --release -- \
  --input-conll data/AIDA-YAGO2-dataset.tsv \
  --input-wiki2qid "${MAPPINGS_FILE}" \
  --output-dir "${OUTPUT_DIR}"
```
This will create three files named `train.parquet`, `validation.parquet`, and `test.parquet` in the directory specified by `${OUTPUT_DIR}`.
The outputs are written as zstd-compressed Apache Parquet files. You can see the details of the schema on Hugging Face.