WaveRNN
train tacotron error
Got everything set up as described in the README guide. I got this error when running python train_tacotron.py. Has anybody encountered this error?
Traceback (most recent call last):
File "train_tacotron.py", line 202, in <module>
main()
File "train_tacotron.py", line 98, in main
tts_train_loop(paths, model, optimizer, train_set, lr, training_steps, attn_example)
File "train_tacotron.py", line 126, in tts_train_loop
for i, (x, m, ids, _) in enumerate(train_set, 1):
File "F:\conda\envs\env_pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "F:\conda\envs\env_pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
w.start()
File "F:\conda\envs\env_pytorch\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "F:\conda\envs\env_pytorch\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "F:\conda\envs\env_pytorch\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
File "F:\conda\envs\env_pytorch\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "F:\conda\envs\env_pytorch\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_tts_datasets.<locals>.<lambda>'
C:\Users\user\Desktop\WaveRNN-master>
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "F:\conda\envs\env_pytorch\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "F:\conda\envs\env_pytorch\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Same issue for me
It looks like this is because Python lambdas can't be pickled. On Windows, multiprocessing uses the spawn start method, so the DataLoader has to pickle everything it sends to its worker processes, including the lambda collate function created inside get_tts_datasets. I am currently trying to find a workaround, but if none exists, then I guess we have to do it on Linux.
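A minimal sketch of why the worker spawn fails: anything handed to a spawned process goes through pickle, and pickle can serialize a module-level function by name but not a function defined inside another function, which is exactly what the traceback's get_tts_datasets.&lt;locals&gt;.&lt;lambda&gt; is:

```python
import pickle

# A module-level function pickles fine (it is serialized by name):
def square(x):
    return x * x

pickle.dumps(square)  # OK

# A lambda defined inside another function has no importable name,
# so pickle rejects it, just like in the traceback above:
def make_fn():
    return lambda x: x * x

try:
    pickle.dumps(make_fn())
except (AttributeError, pickle.PicklingError) as e:
    # AttributeError: Can't pickle local object 'make_fn.<locals>.<lambda>'
    print(type(e).__name__)
```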
I was able to fix this by changing the DataLoader in dataset.py:
change DataLoader( ... num_workers=1 )
to DataLoader( ... num_workers=0 )
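For anyone who wants to keep num_workers > 0, one possible alternative is replacing the lambda with functools.partial over a module-level function, which pickles fine. This is only a sketch: collate_tts and r below are hypothetical stand-ins for whatever the lambda in get_tts_datasets actually wraps, not the repo's real code.

```python
import pickle
from functools import partial

# Hypothetical stand-in for the collate function the lambda wraps;
# it takes the batch plus an extra argument r.
def collate_tts(batch, r):
    return [item[:r] for item in batch]

r = 2

# A lambda like `lambda batch: collate_tts(batch, r)` cannot be pickled,
# but partial over a module-level function can:
good = partial(collate_tts, r=r)
restored = pickle.loads(pickle.dumps(good))
print(restored([[1, 2, 3], [4, 5, 6]]))  # [[1, 2], [4, 5]]
```

Setting num_workers=0 works for the same reason: loading then runs in the main process, so nothing has to be pickled at all, at the cost of losing parallel data loading.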
@serg06 thank you so much, fixed it for me
thanks @serg06 for helping out here 🙂
Glad to help 😄