-
Update: I noticed that the style and diffusion components didn't start training; maybe this is the reason.
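If the style/diffusion stages never kicked in, one quick thing to verify is whether they were scheduled to start before training ended. The sketch below is a generic check under assumed config key names (`diff_epoch`, `joint_epoch` are hypothetical; StyleTTS2's actual config keys may differ):

```python
def schedule_report(epochs, starts):
    """For each training stage, report whether it ever begins.

    `epochs` is the total epoch count; `starts` maps a stage name
    (hypothetical keys, e.g. 'diff_epoch') to its scheduled start epoch.
    A stage whose start epoch is >= the total silently never runs.
    """
    return {
        name: ("never starts" if start >= epochs
               else f"starts at epoch {start}/{epochs}")
        for name, start in starts.items()
    }

# Example: with 50 total epochs, a stage scheduled for epoch 60 never runs.
print(schedule_report(50, {"diff_epoch": 10, "joint_epoch": 60}))
# → {'diff_epoch': 'starts at epoch 10/50', 'joint_epoch': 'never starts'}
```

If a stage never started, the corresponding modules keep their initial weights, which could explain odd outputs at inference time.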
-
Hello, I am having some problems in my experiment and I would like to know if anyone has an idea.
I am trying to fine-tune the LibriTTS model for Portuguese. To do this, I am using the pretrained multilingual PL-BERT model from https://huggingface.co/papercup-ai/multilingual-pl-bert and the Portuguese portion of the CML-TTS dataset. Unfortunately, when I test the model, I get NaN output from the text encoder:
d = model.predictor.text_encoder(d, s, input_lengths, text_mask)
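Before attributing the NaNs to the forward pass itself, it may help to scan the loaded checkpoint for non-finite weights, since a single corrupted tensor will propagate NaN through the encoder. A minimal sketch (the state-dict keys below are made up for illustration, not taken from the actual checkpoint):

```python
import torch

def find_nonfinite(state_dict):
    """Return the names of floating-point tensors containing NaN or Inf."""
    return [name for name, t in state_dict.items()
            if t.is_floating_point() and not torch.isfinite(t).all()]

# Example with a tiny fake state dict containing one corrupted entry.
sd = {
    "text_encoder.weight": torch.randn(4, 4),
    "text_encoder.bias": torch.tensor([0.0, float("nan")]),
}
print(find_nonfinite(sd))  # → ['text_encoder.bias']
```

If the checkpoint itself is clean, `torch.autograd.set_detect_anomaly(True)` during training can help locate the first operation that produced a NaN.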
I am not sure the model trained for enough steps, but since training is expensive I am hesitant to continue. Here is a screenshot of my training (second stage):
I have some hypotheses:
What do you guys think? I am investigating, but with limited resources it is hard to test everything.