Jason
@borisfom @titu1994 Why does this script even exist in the repo?
> Please hold off merge of this until we make sure neural type information is included in nemo files. @ryanleary @junkin Should we try to merge this into NeMo? Or...
In general, the PR LGTM, but I want @MikyasDesta to review/approve it.
Please note that the version number at https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/tts_en_fastpitch tracks the NeMo version number: FastPitch 1.4.0 was last updated with NeMo 1.4.0. It is not a...
Are you interested in [gst_embedding](https://github.com/NVIDIA/OpenSeq2Seq/blob/master/open_seq2seq/encoders/tacotron2_encoder.py#L501) and [token_embeddings](https://github.com/NVIDIA/OpenSeq2Seq/blob/master/open_seq2seq/encoders/tacotron2_encoder.py#L503) from [_embed_style()](https://github.com/NVIDIA/OpenSeq2Seq/blob/master/open_seq2seq/encoders/tacotron2_encoder.py#L341)? You would have to provide those tensors to `sess.run()`, e.g. `sess.run([gst_embedding, token_embeddings])`. The easiest way would probably be to change [get_interactive_infer_results()](https://github.com/NVIDIA/OpenSeq2Seq/blob/02d79ad3106398b88ee4e885dc5639f3fcc28ee2/open_seq2seq/utils/utils.py#L457) such...
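If it helps, here is a minimal sketch of the fetch-extra-tensors pattern, assuming TF 1.x. The toy graph is purely illustrative; it just stands in for the real `gst_embedding` / `token_embeddings` tensors created in `tacotron2_encoder.py`:

```python
import tensorflow as tf

# Any tensor in the graph can be fetched by adding it to the list
# passed to sess.run(). The graph below is a stand-in for the real
# encoder; with OpenSeq2Seq you would fetch the actual embedding tensors.
inputs = tf.placeholder(tf.float32, shape=[None, 4], name="inputs")
hidden = tf.layers.dense(inputs, 8, name="hidden")    # stand-in for token_embeddings
style = tf.reduce_mean(hidden, axis=1, name="style")  # stand-in for gst_embedding
output = tf.layers.dense(hidden, 2, name="output")    # stand-in for the model output

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_vals, style_vals, hidden_vals = sess.run(
        [output, style, hidden],  # fetch the intermediates alongside the output
        feed_dict={inputs: [[1.0, 2.0, 3.0, 4.0]]},
    )
```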
Can you confirm that the code is running on GPU and not CPU?
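For reference, two quick ways to check under TF 1.x (which OpenSeq2Seq targets); both are standard TensorFlow calls:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# List the devices TensorFlow can see; a working GPU setup should show
# a /device:GPU:0 entry alongside the CPU.
print(device_lib.list_local_devices())

# Or log where each op is actually placed when the graph runs:
config = tf.ConfigProto(log_device_placement=True)
sess = tf.Session(config=config)
```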
Tacotron GST is known to hang on CPU (#439), and another user has reported errors on Google Colab (#476). Unfortunately, just like the CPU issue, we most likely will not...
In OpenSeq2Seq, you can specify either `num_epochs` or `max_steps`. You are correct that if you use `continue_learning`, the epoch calculation will be off, but I would advise you to try...
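For reference, a sketch of how the two options appear in an OpenSeq2Seq config file; `num_epochs` and `max_steps` are the real config keys, the surrounding values are illustrative:

```python
# Excerpt from an OpenSeq2Seq config (values illustrative). Specify
# exactly one of the two run-length options:
base_params = {
    # "num_epochs": 200,   # train for a fixed number of passes over the data
    "max_steps": 100000,   # or train for a fixed number of optimizer steps
    # ... other params ...
}
```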
Since the learning rate schedule is computed from the current step number, I would just plot the learning rate in TensorBoard to make sure that it is what you expect it...
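For cross-checking the TensorBoard curve, here is a minimal sketch of what a polynomial decay schedule computes per step; the function and parameter names are illustrative, not necessarily OpenSeq2Seq's exact `poly_decay` signature:

```python
# Polynomial decay: interpolate from initial_lr down to min_lr over
# total_steps, raised to the given power.
def poly_decay(step, initial_lr, total_steps, power=2.0, min_lr=0.0):
    frac = min(step, total_steps) / float(total_steps)
    return (initial_lr - min_lr) * (1.0 - frac) ** power + min_lr

# Print a few points to compare against the TensorBoard plot.
for step in (0, 25000, 50000, 75000, 100000):
    print(step, poly_decay(step, initial_lr=1e-3, total_steps=100000))
```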
Can you redo all experiments and remove the learning rate policy? Remove poly decay and use a fixed learning rate.
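A sketch of the corresponding config change, assuming the `fixed_lr` policy in `open_seq2seq/optimizers/lr_policies.py` (check that module for the exact name and signature); values are illustrative:

```python
from open_seq2seq.optimizers.lr_policies import fixed_lr

base_params = {
    # "lr_policy": poly_decay,      # removed
    # "lr_policy_params": {"learning_rate": 1e-3, "power": 2.0},
    "lr_policy": fixed_lr,          # constant learning rate, no decay
    "lr_policy_params": {"learning_rate": 1e-3},
    # ... other params ...
}
```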