This is the official implementation of Theme Transformer.
Check out our demo and paper: Demo | arXiv
Clone this repo:
git clone https://github.com/atosystem/ThemeTransformer.git -b main --single-branch
We use Python 3.6.8.
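If you want an isolated environment, one way is a virtual environment. This is only a minimal sketch: the environment name below is our own choice, it assumes a Python 3.6 interpreter is available as python3.6, and conda or pyenv work just as well.
python3.6 -m venv theme-transformer-env   # environment name is illustrative
source theme-transformer-env/bin/activate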
Install Python dependencies:
pip install -r requirements.txt
Train the model:
python train.py --cuda
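Training arguments are defined in parse_arg.py, and checkpoints are saved to the ckpts folder during training.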
Generate music conditioned on a theme:
python inference.py --cuda --theme <theme midi file> --out_midi <output midi file>
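For example, using one of the themes from our testing set in theme_files/ (the filename below is only illustrative; substitute any MIDI file from that folder):
python inference.py --cuda --theme theme_files/example_theme.mid --out_midi result.mid   # example_theme.mid is a placeholder name
See the comments in inference.py for the full set of options.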
Repository structure:

.
├── ckpts                  For saving checkpoints while training
├── data_pkl               Stores the training and validation data
│   ├── train_seg2_512.pkl
│   └── val_seg2_512.pkl
├── inference.py           For generating music (detailed usage is documented in the file)
├── logger.py              For logging
├── mymodel.py             The overall Theme Transformer architecture
├── myTransformer.py       Our modified Transformer code
├── parse_arg.py           Arguments for training
├── preprocess             For data preprocessing
│   ├── music_data.py      Theme Transformer PyTorch dataset definition
│   └── vocab.py           Our vocabulary for the Transformer
├── randomness.py          For fixing the random seed
├── readme.txt             Readme
├── tempo_dict.json        The original tempo information from POP909 (used at inference time)
├── theme_files/           The themes from our testing set
├── trained_model          The model we trained
│   └── model_ep2311.pt
└── train.py               Code for training Theme Transformer
If you find this work helpful and use our code in your research, please cite our paper:
@article{shih2022theme,
  title={Theme Transformer: Symbolic Music Generation with Theme-Conditioned Transformer},
  author={Yi-Jen Shih and Shih-Lun Wu and Frank Zalkow and Meinard Müller and Yi-Hsuan Yang},
  journal={IEEE Transactions on Multimedia},
  year={2022},
  publisher={IEEE}
}