wavegan
Setup on Colab
This is my first time running this project. After reading all the requirements, I decided to run it on Google Colab, since Colab offers a free GPU option. Is it OK to run it on Colab? If not, please guide me on setting up this project. Thank you!
Hi, I am also running it in Colab. It's pretty easy to get it running there:
- git clone the project into Colab:
!git clone https://github.com/chrisdonahue/wavegan.git
- Switch to the folder:
%cd wavegan/
- run the command to train the GAN (note: !export runs in its own subshell, so it doesn't carry over to the next ! command; use %env instead):
%env CUDA_VISIBLE_DEVICES=0
!python train_wavegan.py train /content/gan --data_dir /content/train
Here /content/gan is the output directory and /content/train is the input directory (where your audio data lives).
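For reference, here is the whole setup as a single Colab cell (a sketch assuming your audio has already been uploaded to /content/train, as in the paths above):

```python
# Clone the repo and move into it
!git clone https://github.com/chrisdonahue/wavegan.git
%cd wavegan/

# Make the Colab GPU visible and start training.
# /content/gan = checkpoint/output dir, /content/train = your audio files.
%env CUDA_VISIBLE_DEVICES=0
!python train_wavegan.py train /content/gan --data_dir /content/train
```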
Answered my question, thank you :)
Have you had success training on Google Colab?
When I try to run the code, it throws an error because of the TensorFlow version... would anyone here mind sharing their Colab notebook?
Thanks!
I'm getting the TensorFlow version error too.
If I use TensorFlow 1, I get the error "tensorflow.data has no attribute: experimental", and if I use TensorFlow 2 I get "tensorflow has no attribute: placeholder".
Hi, WaveGAN was designed for TF 1.12.0 (as per the documentation). However, I managed to run it using TF 1.15. You may want to try this on Colab before running your experiments:
%tensorflow_version 1.x
import tensorflow
Then check the version:
print(tensorflow.__version__)
If the version still reports 2.x (Colab's default), reset the runtime.
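A compact version of the same check that fails fast instead of erroring mid-training (a sketch; it must run before anything else imports TensorFlow):

```python
# Pin the Colab runtime to TF 1.x; this magic only takes effect
# if it runs before TensorFlow is first imported.
%tensorflow_version 1.x
import tensorflow as tf

# Stop immediately if the runtime is still on 2.x.
assert tf.__version__.startswith('1.'), (
    'Still on TF %s: reset the runtime and rerun this cell first.' % tf.__version__)
print(tf.__version__)
```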
Thank you! I got the training to work.
On my side I got training to run by installing TensorFlow 1.15, but it soon crashes, complaining that a couple of numpy modules can't be found. Could I see your running Colab session?
@moih Absolutely! https://colab.research.google.com/drive/1N0CtpO6VZvcyE72r3eEkBQthQsUlm6Hf
@Tylersuard thanks, working now for me! Just curious, how are you managing to download the checkpoints?
I'm downloading the checkpoints manually, which is probably not the best way. There's a command in the docs for how to save them automatically though.
Here's a modified version of your notebook that saves checkpoints directly to your Google Drive: https://colab.research.google.com/drive/1oZBt78G2TSnRq4IY194dW3IfuVYNsT-B
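If you'd rather wire that up yourself, the key piece is just mounting Drive and pointing the training directory at it (a sketch; the wavegan/train path under My Drive is an example):

```python
# Mount Google Drive so checkpoints survive Colab session resets.
from google.colab import drive
drive.mount('/content/drive')

# Write checkpoints straight to Drive instead of ephemeral /content storage.
!python train_wavegan.py train '/content/drive/My Drive/wavegan/train' \
    --data_dir /content/train
```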
Thank you sir!
@moih were you able to get generation to work after training the model?
Hi, only when I download the whole results folder and run generation directly on my computer, not with the example generator code provided... let me know if you manage to do it yourself.
Hey! Figured I'd throw my two cents in: I've managed to successfully train and generate samples using Google Colab. A couple of things I found out along the way:
- Set up everything in Google Drive; that way, if the Colab session times out for one of many reasons, your checkpoint files are still saved.
- Depending on how much storage your Drive account has (if you only have the default free 15 GB, for example), the "deletion" of checkpoint files as training progresses does not actually delete the files from your Drive account. It puts them into your trash folder, which still counts against your Drive storage limit. As far as I could find there's no way to change/disable this behavior, so my solution (as hacky as it might be) was to have a second Colab session running that regularly deletes files from the trash using [PyDrive](https://pythonhosted.org/PyDrive/) (see the sketch after this list).
- On the same note, if you do run out of Google Drive space, things in my experience fail silently, with the checkpoint files not being saved to your Drive until you clear the trash. I've lost checkpoints to this, so be mindful of it.
- Generation worked more or less with the exact example generation code in the README (making sure to use %tensorflow_version 1.x); the only modification needed is changing the name of the checkpoint you're using, for example: saver.restore(sess, 'drive/My Drive/colab/wavegan/train/model.ckpt-XXXX') where XXXX is the actual number of the checkpoint you're attempting to load.
- You can also run the provided backup script in another Colab instance while connected to your Drive.
Hope this helps!
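For the trash-emptying workaround mentioned above, something along these lines works (a sketch using the standard Colab + PyDrive auth flow; the ten-minute interval is arbitrary):

```python
# Run this in a second Colab session to keep the Drive trash empty,
# so deleted checkpoints stop counting against your storage quota.
import time
from google.colab import auth
from oauth2client.client import GoogleCredentials
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

while True:  # stop the cell manually when training is done
    # List everything currently in the trash and delete it permanently.
    for f in drive.ListFile({'q': 'trashed=true'}).GetList():
        print('Deleting', f['title'])
        f.Delete()  # permanent delete; frees quota immediately
    time.sleep(600)  # re-check every 10 minutes
```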
This was helpful for sure! For anyone else who runs into this: if you use the Colab notebook from the README, as opposed to the snippet provided to produce a single clip, one thing I ran into was there being no tensor called 'G_z_spec:0'. However, if you compare with the logic from the README, you can see that the G_z_spec part is not needed at all (I'm guessing G_z_spec might be for the spectrogram GAN, SpecGAN?). Simply comment out the G_z_spec line, change
_G_z, _G_z_spec = sess.run([G_z, G_z_spec], {z: _z})
to
_G_z = sess.run(G_z, {z: _z})
and comment out
display(PIL.Image.fromarray(_G_z_spec[i]))
and it works perfectly :)
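Putting the README snippet and that patch together, the generation cell ends up looking roughly like this (a sketch: the Drive paths and the XXXX checkpoint number are placeholders from the examples above, and infer.meta is assumed to sit in the infer/ subfolder of your train directory):

```python
%tensorflow_version 1.x
import numpy as np
import tensorflow as tf
from IPython.display import display, Audio

# Load the inference graph written during training.
tf.reset_default_graph()
saver = tf.train.import_meta_graph('drive/My Drive/colab/wavegan/train/infer/infer.meta')
graph = tf.get_default_graph()
sess = tf.InteractiveSession()
saver.restore(sess, 'drive/My Drive/colab/wavegan/train/model.ckpt-XXXX')

# Sample latent vectors and synthesize audio; G_z_spec is skipped entirely.
_z = (np.random.rand(10, 100) * 2.) - 1.
z = graph.get_tensor_by_name('z:0')
G_z = graph.get_tensor_by_name('G_z:0')[:, :, 0]
_G_z = sess.run(G_z, {z: _z})

# Listen to the first generated clip (WaveGAN's default sample rate is 16 kHz).
display(Audio(_G_z[0], rate=16000))
```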
Still can't get the generation running. Would you mind sharing a notebook? :)