# Reimplementation of the speech decoding paper by Meta AI

Paper: https://arxiv.org/pdf/2208.12266.pdf

## Status

Works for the Gwilliams2022 and Brennan2018 datasets.

## TODOs

- Full reproducibility support; this will be useful for hyperparameter tuning.
- Match accuracy to the numbers reported in the paper.
- Fix the high memory consumption of Gwilliams2022 multiprocessing.

## Usage

### For EEG (Brennan et al., 2019)

Run `python train.py dataset=Brennan2018 rebuild_datasets=True`. With `rebuild_datasets=True`, audio embeddings are pre-computed and M/EEG data are pre-processed before training begins; with `rebuild_datasets=False`, the existing pre-processed M/EEG data and pre-computed embeddings are reused. The latter is useful if you want to run the model on exactly the same data and embeddings several times.

### For MEG (Gwilliams et al., 2022)

Run `python train.py dataset=Gwilliams2022 rebuild_datasets=True`. As above, `rebuild_datasets=False` reuses the existing pre-processed M/EEG data and pre-computed embeddings, so you can run the model on exactly the same data and embeddings several times. Pre-processing Gwilliams2022 and computing the embeddings takes ~30 minutes on 20 cores, so set `rebuild_datasets=False` for subsequent runs (or omit the flag, because `rebuild_datasets=False` is the default). A minimal sketch of this caching pattern follows.
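
For reference, the sketch below shows the general flag semantics described above. It is illustrative only: the cache path and helper names are assumptions, not this repo's actual API.

```python
# Illustrative sketch of the rebuild_datasets behaviour; paths and helper
# names are assumptions, not this repo's actual API.
import os
import torch

CACHE = "data/preprocessed/dataset.pt"  # hypothetical cache location

def build_dataset() -> dict:
    """Stand-in for the expensive M/EEG pre-processing + embedding step."""
    return {"meg": torch.zeros(1), "audio_emb": torch.zeros(1)}

def load_dataset(rebuild_datasets: bool = False) -> dict:
    if not rebuild_datasets and os.path.exists(CACHE):
        return torch.load(CACHE)  # fast path: reuse identical data/embeddings
    dataset = build_dataset()     # slow path: ~30 min for Gwilliams2022
    os.makedirs(os.path.dirname(CACHE), exist_ok=True)
    torch.save(dataset, CACHE)
    return dataset
```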

## Monitoring training progress with W&B

To log training runs to Weights & Biases, set `entity` and `project` in the `wandb` section of `config.yaml`.
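
That section of `config.yaml` might look like the following; only the `entity` and `project` keys are confirmed by the text above, and the values are placeholders:

```yaml
wandb:
  entity: your-wandb-entity   # your W&B username or team name
  project: speech-decoding    # the W&B project that runs are logged under
```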

## Datasets

- Gwilliams et al., 2022
- Brennan et al., 2019

You will need `S01.mat` to `S49.mat` placed under `data/Brennan2018/raw/` and `audio.zip` unzipped into `data/Brennan2018/audio/` to run the code.
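
The snippet below is an optional sanity check (not part of the repo) that the Brennan2018 files are laid out as expected:

```python
# Optional sanity check for the expected Brennan2018 layout; not part of the repo.
from pathlib import Path

raw = Path("data/Brennan2018/raw")
missing = [f"S{i:02d}.mat" for i in range(1, 50)
           if not (raw / f"S{i:02d}.mat").exists()]
if missing:
    print(f"{len(missing)} subject file(s) missing, e.g. {missing[:3]}")

audio = Path("data/Brennan2018/audio")
if not audio.is_dir() or not any(audio.iterdir()):
    print("Unzip audio.zip into data/Brennan2018/audio/ first.")
```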