This project attempts to solve the BoolQ task from the SuperGLUE benchmark using several approaches. For instructions on how to run each approach, please refer to that approach's README file.
Approach 1: In this approach we attempt to solve the BoolQ task by fine-tuning pre-trained language models such as BERT and RoBERTa, along with some custom neural network models, and we also experiment with textual entailment as a way to improve performance.
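As a rough illustration of the fine-tuning setup, the sketch below loads BoolQ and fine-tunes a pre-trained model as a binary classifier over (question, passage) pairs. It assumes the HuggingFace `transformers` and `datasets` libraries; the model name and hyperparameters are illustrative, not the exact configuration used in this project.

```python
# A minimal sketch, assuming HuggingFace transformers and datasets.
# Model name and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("super_glue", "boolq")  # fields: question, passage, label
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def encode(batch):
    # BoolQ pairs a yes/no question with a supporting passage.
    return tokenizer(batch["question"], batch["passage"],
                     truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)  # binary yes/no classification

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="boolq-roberta",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```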
Approach 2: This approach extends Approach 1 by integrating entity linking, alongside the pre-trained models and entailment, to gather useful information that could help answer the question more accurately.
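As a rough sketch of the entity-extraction step, the snippet below uses spaCy's named-entity recognizer as a stand-in for a full entity linker (which would additionally resolve each mention to a knowledge-base entry). The model name and helper function are illustrative assumptions, not this project's exact pipeline.

```python
# A minimal sketch, assuming spaCy with the en_core_web_sm model installed.
# NER stands in for a full entity linker here; a real linker would also
# resolve each mention to a Wikipedia/Wikidata entry.
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_mentions(text):
    """Return (mention, type) pairs found in a question or passage."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

print(entity_mentions("Bern is the de facto capital of Switzerland."))
# Example output (exact labels depend on the spaCy model):
# [('Bern', 'GPE'), ('Switzerland', 'GPE')]
```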
Approach 3: In this approach we integrate entity linking with a number of NLP techniques and toolkits to extract more information about the context of the passages and questions, and we augment our dataset with this new information in an attempt to improve the performance of the pre-trained language models on the BoolQ task.
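One way such augmentation might look: prepend a short description of each linked entity to the passage so the language model sees it as extra context. The sketch below is an assumption about the general shape of this step; `describe_entity` is a hypothetical helper standing in for whatever knowledge source the linker resolves to (e.g. Wikipedia abstracts).

```python
# A minimal sketch of passage augmentation, assuming spaCy for entity
# extraction. describe_entity is a hypothetical placeholder for a real
# knowledge-base lookup (e.g. the first sentence of a Wikipedia article).
import spacy

nlp = spacy.load("en_core_web_sm")

def describe_entity(mention):
    # Hypothetical lookup; replace with a real knowledge-base query.
    return f"[{mention}: background description goes here.]"

def augment_passage(passage):
    """Prepend entity descriptions so the model sees them as context."""
    doc = nlp(passage)
    descriptions = " ".join(describe_entity(ent.text) for ent in doc.ents)
    return f"{descriptions} {passage}" if descriptions else passage

print(augment_passage("Bern is the de facto capital of Switzerland."))
```

The augmented passages would then feed into the same kind of fine-tuning pipeline sketched for Approach 1.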