Voice-Cloning-App
Error in Remote Training
INFO:root:Setting batch size to 38, learning rate to 0.0003082207001484488. (14GB GPU memory free)
INFO:root:Loading model...
INFO:root:Loaded model
INFO:root:Loading data...
56 train files, 14 test files
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-17-9bb687479d52> in <cell line: 9>()
7 symbols = load_symbols(os.path.join(alphabet_directory, alphabet.value)) if alphabet.value else DEFAULT_ALPHABET
8 checkpoint_path = os.path.join(checkpoint_directory, dataset.value, checkpoint.value) if checkpoint.value else None
----> 9 train(
10 metadata_path=metadata,
11 dataset_directory=wavs,
3 frames
/content/Voice-Cloning-App/training/tacotron2_model/stft.py in __init__(self, filter_length, hop_length, win_length, window)
67 # get window and zero center pad it to filter_length
68 fft_window = get_window(window, win_length, fftbins=True)
---> 69 fft_window = pad_center(fft_window, filter_length)
70 fft_window = torch.from_numpy(fft_window).float()
71
TypeError: pad_center() takes 1 positional argument but 2 were given
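
The error comes from librosa's pad_center API: in recent librosa releases (0.10+) the size argument is keyword-only, so the old positional call in stft.py no longer works. A minimal repro sketch, assuming a recent librosa in the Colab environment:

```python
import numpy as np
from librosa.util import pad_center

win = np.hanning(800)

try:
    pad_center(win, 1024)            # positional call, as in stft.py line 69
except TypeError as e:
    print(e)                         # pad_center() takes 1 positional argument but 2 were given

padded = pad_center(win, size=1024)  # keyword form works on both old and new librosa
print(padded.shape)                  # (1024,)
```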
Same here. Don't know what to do since local training always gave a CUDA out-of-memory error, so remote is the way to go.
Having the same issue here, any fix?
Change librosa to version 0.8.1, which has the right signature for pad_center, I guess:
http://librosa.org/doc-playground/0.8.1/generated/librosa.util.pad_center.html
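
For what it's worth, two ways to apply that in the Colab notebook (a sketch, not verified against this repo): either run `!pip install librosa==0.8.1` in a cell before training and restart the runtime, or patch the call in training/tacotron2_model/stft.py so size is passed as a keyword, which both old and new librosa accept:

```python
# training/tacotron2_model/stft.py, around the lines shown in the traceback
# (get_window, pad_center and torch are already imported in that file):
fft_window = get_window(window, win_length, fftbins=True)
fft_window = pad_center(fft_window, size=filter_length)  # keyword instead of positional
fft_window = torch.from_numpy(fft_window).float()
```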