Custom implementation of the NVIDIA WaveGlow model by Prenger et al. using TensorFlow 2.0. You can find audio samples here.
Clone the repository, create a virtualenv, install the required packages, and create the default directories:
git clone git@github.com:vatj/waveglow-tensorflow2.git
cd waveglow-tensorflow2
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
mkdir data logs checkpoints data/float32 logs/float32 checkpoints/float32
In the scripts/hparams.py configuration file, set the path where the LJSpeech dataset will be downloaded by modifying the data_dir entry, and choose the floating point precision to train with by editing the ftype entry (a hypothetical sketch of these entries follows the commands below). Run the data preprocessing script (alternatively, one can run the full notebook in Jupyter); note that preprocessing must be run again to train with a different float type. Then run the training script:
python scripts/raw_ljspeech_to_tfrecords.py
python scripts/training_main.py
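The data_dir and ftype entries live in scripts/hparams.py. The sketch below is purely hypothetical, only the two entry names come from this README; consult the actual file for its real layout.

# Hypothetical excerpt of scripts/hparams.py -- illustrative only.
import tensorflow as tf

hparams = dict(
    data_dir='/path/to/LJSpeech-1.1',  # where the LJSpeech dataset is downloaded / read from
    ftype=tf.float32,                  # training precision, e.g. tf.float16 for half precision
)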
Use TensorBoard in a notebook to monitor training, or run TensorBoard directly from the command line:
jupyter-notebook notebooks/control_tensorboard.ipynb
# or
tensorboard --logdir ./logs/float32
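If you prefer to launch TensorBoard from a notebook cell yourself (independently of notebooks/control_tensorboard.ipynb, whose contents may differ), the standard TensorBoard notebook extension works as follows:

# In a Jupyter cell: load the TensorBoard notebook extension and point it
# at the same log directory the training script writes to.
%load_ext tensorboard
%tensorboard --logdir ./logs/float32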
- Enable cloud TPU support
- Fix half-precision issues: computing the determinant in half precision appears to be numerically unstable in the current implementation.
- Add metrics to the training loop, e.g. mean loss over epoch (see the sketch below).
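As a rough, illustrative sketch of the last item (not the repository's actual training loop), a per-epoch mean loss can be tracked with tf.keras.metrics.Mean; train_step and dataset below are placeholders for whatever scripts/training_main.py really uses.

import tensorflow as tf

# Illustrative only: accumulate the mean loss over one epoch with a Keras metric.
dataset = tf.data.Dataset.from_tensor_slices(tf.random.normal([8, 4])).batch(2)

def train_step(batch):
    # Placeholder for the real step, which would run the WaveGlow forward pass,
    # compute the loss and apply gradients.
    return tf.reduce_mean(tf.square(batch))

epoch_loss = tf.keras.metrics.Mean(name='epoch_loss')
for batch in dataset:
    epoch_loss.update_state(train_step(batch))

print('mean loss over epoch:', float(epoch_loss.result()))
epoch_loss.reset_states()  # clear the metric before starting the next epoch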