wavegan
Can't run the project.
Hi,
I'm looking into making audio GANs, and this implementation seems really promising, clear, and cool. But when trying to run it, I ran into a really confusing error message:
Traceback (most recent call last):
  File "train_wavegan.py", line 110, in <module>
    sample_size=args['sample_size'])
  File "C:\Users\Admin\Documents\GitHub\wavegantorch\wgan.py", line 194, in train_wgan
    lmbda, use_cuda, compute_grads=True)
  File "C:\Users\Admin\Documents\GitHub\wavegantorch\wgan.py", line 33, in compute_discr_loss_terms
    D_real.backward(neg_one)
  File "C:\Users\Admin\Anaconda3\envs\minimum\lib\site-packages\torch\tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\Admin\Anaconda3\envs\minimum\lib\site-packages\torch\autograd\__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: invalid gradient at index 0 - expected shape [] but got [1]
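For context, the mismatch can be reproduced in a few lines on a recent PyTorch. This is a minimal sketch (not the repo's actual code): in newer versions a reduction like `mean()` returns a 0-dim tensor, so passing a 1-element gradient tensor to `backward()` no longer matches.

```python
import torch

x = torch.ones(3, requires_grad=True)
d_real = x.mean()                      # 0-dim ("scalar") tensor, shape []
neg_one = torch.FloatTensor([1]) * -1  # 1-element tensor, shape [1]

try:
    d_real.backward(neg_one)           # shapes [] vs [1] do not match
except RuntimeError as e:
    print("mismatch:", e)

d_real.backward(torch.tensor(-1.0))    # a 0-dim gradient matches the output
print(x.grad)                          # each element receives -1/3
```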
Is this some API change in torch? Any ideas for a fix?
Thanks for your interest! I haven't seen this error before, but I suspect it's an API change. PyTorch has gone through a number of fairly significant API changes over the past few months, so it wouldn't be surprising if something broke as a result. I'll let you know if I find anything.
Hi! I'm getting the same error! Do you have any suggestions? Which torch version have you used? Note: I'm using python 2.7
I got the same error on both Windows (Nvidia GPU, CUDA 9.x) and macOS (trying to flag no-cuda, since that machine has an AMD external eGPU with OpenCL). Any suggestions @jtcramer?
Hi, thanks for implementing this! I'm reading your code. Could you please tell me whether you normalize the input data (waveform) to a certain range (e.g. -1 to 1), or whether you just use the original data? I didn't find any code that normalizes it, but in the original WaveGAN paper the activation function of the generator's last layer is tanh, so I'm very curious about that.
Thank you!
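For what it's worth, peak normalization to [-1, 1] (matching a tanh output range) is only a few lines. This is a hypothetical helper for illustration, not code from this repo:

```python
import numpy as np

def normalize_waveform(x):
    """Scale a waveform to [-1, 1] by its peak absolute amplitude."""
    peak = np.max(np.abs(x))
    return x / peak if peak > 0 else x

audio = np.array([0.0, 0.5, -2.0, 1.0])
print(normalize_waveform(audio))  # peak sample becomes -1.0
```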
> Hi! I'm getting the same error! Do you have any suggestions? Which torch version have you used? Note: I'm using python 2.7
I was able to fix the problem running it on torch 0.4.1 and adjusting the code where a float is returned instead of a 1-element list.
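If it helps, "a float returned instead of a 1-element list" likely refers to losses becoming 0-dim tensors; `.item()` is the version-safe way to read them as plain Python floats (a sketch, assuming that is the situation here):

```python
import torch

loss = torch.tensor(3.0)  # 0-dim tensor, as returned by reductions in torch >= 0.4
# old-style code read it as loss.data[0], which breaks on 0-dim tensors
value = loss.item()       # plain Python float instead of indexing with [0]
print(value)
```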
Getting a similar error:
Traceback (most recent call last):
  File "train_wavegan.py", line 109, in <module>
    sample_size=args['sample_size'])
  File "/content/wavegan/wgan.py", line 194, in train_wgan
    lmbda, use_cuda, compute_grads=True)
  File "/content/wavegan/wgan.py", line 33, in compute_discr_loss_terms
    D_real.backward(neg_one)
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 93, in backward
    grad_tensors = _make_grads(tensors, grad_tensors)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 29, in _make_grads
    + str(out.shape) + ".")
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).
You can fix the problem as I did. Read my comment above
> I was able to fix the problem running it on torch 0.4.1 and adjusting the code where a float is returned instead of a 1-element list.
I understand the first part. Would you please clarify the second part? Which lines of code did you change, and to what? Alternatively, could I see your corrected code?
> I was able to fix the problem running it on torch 0.4.1 and adjusting the code where a float is returned instead of a 1-element list.

> I understand the first part. Would you please clarify the second part? Which lines of code did you change, and to what? Alternatively, could I see your corrected code?
Unfortunately I no longer have access to my code, so I can't see exactly how I fixed it. However, from the error it's clear that something that was a scalar in the previous torch version is now a vector with 1 element. My guess is that you should use `D_real.backward(neg_one)[0]` on line 33 of wgan.py.
If anyone stumbles across this, it's a version problem; see here: https://discuss.pytorch.org/t/how-to-fix-mismatch-in-shape-when-using-backward-function/58865.

The solution seems to be to replace `torch.FloatTensor([1])` with `torch.tensor(1, dtype=torch.float)`. This will also introduce a few further problems downstream where the cost is accessed as if it were an array. In addition, in more recent versions of PyTorch `volatile` no longer works, so those code fragments would have to be replaced with `torch.no_grad()` blocks. If the author is still maintaining this repository, I could create a pull request for compatibility with newer PyTorch versions.
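Put together, the two changes described above might look like this. It's a sketch against the lines of wgan.py quoted in the tracebacks, not a tested patch of the repo:

```python
import torch

# old: neg_one = torch.FloatTensor([1]) * -1   # shape [1], breaks backward()
neg_one = torch.tensor(-1, dtype=torch.float)  # 0-dim, matches a scalar loss

x = torch.ones(4, requires_grad=True)
d_real = x.mean()
d_real.backward(neg_one)                       # no shape mismatch now

# old: inference inputs were wrapped as Variable(..., volatile=True)
# new: disable autograd with a no_grad() block instead
with torch.no_grad():
    d_eval = (x * 2).sum()                     # no graph is built here
print(x.grad, d_eval.requires_grad)
```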