tacotron2-tts-GUI
Text To Speech (TTS) GUI wrapper for NVIDIA Tacotron 2+Waveglow. For custom Twitch TTS.
GUI Work in Progress (update 4 August 2020)
GUI wrapper for Tacotron 2 speech synthesis. CPU-only synthesis can be enabled with a toggle switch, and a portable exe (which runs on CPU only) is available.
Also plays TTS donation alerts from Stream Elements.
Screenshots: Main UI, Stream Elements integration
Overview
A machine-learning-based Text to Speech program with a user-friendly GUI. The target audience includes Twitch streamers and content creators looking for an open-source TTS program. The aim of this software is to make TTS synthesis accessible offline (no coding experience, GPU, or Colab required) in a portable exe.
Features
- Reads donations from Stream Elements automatically
- PyQt5 wrapper for NVIDIA/tacotron2 & /waveglow
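Reading donations boils down to fetching recent tips from Stream Elements and turning each one into a line of text for the synthesizer. A minimal sketch of that second step, assuming a payload shaped like the public Stream Elements tips API (the field names and function name are assumptions, not taken from this repo's code):

```python
# Sketch: build the sentence the TTS engine should read for one donation.
# The payload shape (donation -> user/amount/currency/message) is an
# assumption based on the Stream Elements tips API; the GUI may differ.

def tip_to_tts_text(tip: dict) -> str:
    donation = tip.get("donation", {})
    user = donation.get("user", {}).get("username", "Anonymous")
    amount = donation.get("amount", 0)
    currency = donation.get("currency", "USD")
    message = donation.get("message", "")
    text = f"{user} donated {amount} {currency}."
    if message:
        text += f" {message}"
    return text

sample = {
    "donation": {
        "user": {"username": "viewer123"},
        "amount": 5,
        "currency": "USD",
        "message": "Hello streamer!",
    }
}
print(tip_to_tts_text(sample))  # viewer123 donated 5 USD. Hello streamer!
```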
Download Link
A portable executable can be found at the Releases page, or directly here. Download a pretrained Tacotron 2 and Waveglow model from below.
Warning: the portable executable runs on CPU, which is more than 10x slower than running on a GPU.
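The CPU/GPU behavior described above amounts to a small device-selection rule: force CPU when the toggle is on (or in the portable exe), otherwise use CUDA when available. A sketch of that logic, with an illustrative function name not taken from the repo:

```python
# Sketch of the GUI's CPU-only toggle (function name is illustrative).
# The portable exe effectively always passes force_cpu=True.

def select_device(force_cpu: bool, cuda_available: bool) -> str:
    """Return the torch device string synthesis should run on."""
    if force_cpu or not cuda_available:
        return "cpu"
    return "cuda"

print(select_device(force_cpu=True, cuda_available=True))   # cpu
print(select_device(force_cpu=False, cuda_available=True))  # cuda
```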
Building from source
Requirements
- Python >=3.7
- librosa
- numpy
- PyQt5==5.15.0
- requests
- tqdm
- matplotlib
- scipy
- num2words
- pygame
To Run
python gui.py
License
- NVIDIA/tacotron2 & waveglow: BSD-3-Clause License
Notes
- TTS code from NVIDIA/tacotron2
- Partial GUI code from https://github.com/CorentinJ/Real-Time-Voice-Cloning and layout inspired by u/realstreamer's Forsen TTS https://www.youtube.com/watch?v=kL2tglbcDCo
Original Repo:
Tacotron 2 (without wavenet)
PyTorch implementation of Natural TTS Synthesis By Conditioning Wavenet On Mel Spectrogram Predictions.
This implementation includes distributed and automatic mixed precision support and uses the LJSpeech dataset.
Distributed and Automatic Mixed Precision support relies on NVIDIA's Apex and AMP.
Visit our website for audio samples using our published Tacotron 2 and WaveGlow models.
Pre-requisites
- NVIDIA GPU + CUDA cuDNN
Setup
- Download and extract the LJ Speech dataset
- Clone this repo:
git clone https://github.com/NVIDIA/tacotron2.git
- CD into this repo:
cd tacotron2
- Initialize submodule:
git submodule init; git submodule update
- Update .wav paths:
sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt
- Alternatively, set load_mel_from_disk=True in hparams.py and update the mel-spectrogram paths
- Install PyTorch 1.0
- Install Apex
- Install Python requirements (or build the Docker image):
pip install -r requirements.txt
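On Windows, where sed is typically unavailable, the filelist path fix from the Setup steps can be done in a few lines of Python. This is a sketch equivalent to the sed command, not code from the repo:

```python
# Cross-platform equivalent of:
#   sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt
import tempfile
from pathlib import Path

def fix_filelists(filelist_dir, wav_dir="ljs_dataset_folder/wavs"):
    """Replace the DUMMY placeholder in every .txt filelist under filelist_dir."""
    for txt in Path(filelist_dir).glob("*.txt"):
        txt.write_text(txt.read_text().replace("DUMMY", wav_dir))

# Demo on a throwaway filelist (real use: fix_filelists("filelists")).
with tempfile.TemporaryDirectory() as d:
    sample = Path(d) / "ljs_audio_text_train_filelist.txt"
    sample.write_text("DUMMY/LJ001-0001.wav|some transcript\n")
    fix_filelists(d)
    result = sample.read_text()
print(result)  # ljs_dataset_folder/wavs/LJ001-0001.wav|some transcript
```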
Training
- Train:
python train.py --output_directory=outdir --log_directory=logdir
- (Optional) Monitor with TensorBoard:
tensorboard --logdir=outdir/logdir
Training using a pre-trained model
Training using a pre-trained model can lead to faster convergence. By default, the dataset-dependent text embedding layers are ignored.
- Download our published Tacotron 2 model
- Warm start from the checkpoint:
python train.py --output_directory=outdir --log_directory=logdir -c tacotron2_statedict.pt --warm_start
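Under the hood, warm starting loads the checkpoint while dropping the dataset-dependent layers (the repo's hparams default ignore_layers to the text embedding), so those layers are retrained from scratch on the new dataset. A sketch of that filtering step, not the repo's exact code:

```python
# Sketch of the --warm_start filtering: keep a checkpoint's weights except
# dataset-dependent layers. The repo's default ignore_layers is
# ['embedding.weight'], since the text embedding depends on the symbol set.

def filter_warm_start(state_dict: dict, ignore_layers=("embedding.weight",)) -> dict:
    return {k: v for k, v in state_dict.items() if k not in ignore_layers}

ckpt = {"embedding.weight": "old-vocab-weights", "encoder.lstm.weight": "w"}
kept = filter_warm_start(ckpt)
print(sorted(kept))  # ['encoder.lstm.weight']
```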
Multi-GPU (distributed) and Automatic Mixed Precision Training
- Launch distributed mixed-precision training:
python -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True
Inference demo
- Download our published Tacotron 2 model
- Download our published WaveGlow model
- Start a Jupyter server:
jupyter notebook --ip=127.0.0.1 --port=31337
- Load inference.ipynb
N.B. When performing mel-spectrogram-to-audio synthesis, make sure Tacotron 2 and the mel decoder (vocoder) were trained on the same mel-spectrogram representation.
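One way to honor this note is to compare the audio/mel settings of both models before synthesis. A sketch of such a sanity check; the parameter names follow the repo's hparams.py, but the check itself is illustrative, not part of the repo:

```python
# Sketch: both models must agree on the mel-spectrogram representation.
# Key names follow the Tacotron 2 repo's hparams.py; values are examples.
MEL_KEYS = ("sampling_rate", "filter_length", "hop_length", "win_length",
            "n_mel_channels", "mel_fmin", "mel_fmax")

def mels_compatible(a: dict, b: dict) -> bool:
    """True if both hparam dicts agree on every mel-related setting."""
    return all(a.get(k) == b.get(k) for k in MEL_KEYS)

taco = {"sampling_rate": 22050, "n_mel_channels": 80, "hop_length": 256}
vocoder_ok = {"sampling_rate": 22050, "n_mel_channels": 80, "hop_length": 256}
vocoder_bad = dict(vocoder_ok, sampling_rate=16000)
print(mels_compatible(taco, vocoder_ok), mels_compatible(taco, vocoder_bad))
# True False
```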
Related repos
WaveGlow Faster than real time Flow-based Generative Network for Speech Synthesis
nv-wavenet Faster than real time WaveNet.
Acknowledgements
This implementation uses code from the following repos: Keith Ito, Prem Seetharaman as described in our code.
We are inspired by Ryuichi Yamamoto's Tacotron PyTorch implementation.
We are thankful to the Tacotron 2 paper authors, especially Jonathan Shen, Yuxuan Wang and Zongheng Yang.