
Setup on Colab

Open MAnal0025 opened this issue 5 years ago • 18 comments

This is my first time running this project. After reading all the requirements, I decided to run it on Google Colab, since it offers a free GPU. Is it OK to run on Colab, or can you guide me through setting up this project? Thank you!

MAnal0025 avatar Dec 14 '19 12:12 MAnal0025

Hi, I am also running it in Colab. It's pretty easy to get it running there:

  • git clone the project into Colab: !git clone https://github.com/chrisdonahue/wavegan.git
  • Switch to the folder: %cd wavegan/
  • Run the commands to train the GAN:
%env CUDA_VISIBLE_DEVICES=0
!python train_wavegan.py train /content/gan --data_dir /content/train

Here, /content/gan is the output dir and /content/train is the input dir (where your audio data lives).
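Before kicking off a long training run, it can save time to confirm that the input directory actually contains audio files the trainer can see. A minimal sketch (the /content/train path is the hypothetical one from above; adjust to your own layout):

```python
import glob
import os

def count_audio_files(data_dir, extensions=(".wav", ".mp3", ".ogg")):
    """Count audio files sitting directly inside data_dir."""
    matches = []
    for ext in extensions:
        matches.extend(glob.glob(os.path.join(data_dir, "*" + ext)))
    return len(matches)

# In a Colab cell, check the training folder before launching:
# assert count_audio_files("/content/train") > 0, "empty --data_dir"
```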

jvel07 avatar Jan 29 '20 15:01 jvel07

Answered my question, thank you :)

Tylersuard avatar Mar 26 '20 06:03 Tylersuard

Have you had success training on Google Colab?

When I try to run the code, it throws an error because of the TensorFlow version. Would anyone here mind sharing their Colab notebook?

Thanks!

moih avatar Apr 08 '20 13:04 moih

I'm getting the tensorflow version error too.

Tylersuard avatar Apr 08 '20 15:04 Tylersuard

If I use TensorFlow 1, I get the error "tensorflow.data has no attribute: experimental", and if I use TensorFlow 2 I get "tensorflow has no attribute: placeholder".

Tylersuard avatar Apr 08 '20 15:04 Tylersuard

Hi, WaveGAN was designed for TF 1.12.0 (per the documentation). However, I managed to run it using TF 1.15. You may want to try this on Colab before running your experiments:
%tensorflow_version 1.x
import tensorflow
Then check the version:
print(tensorflow.__version__)
If the version is still TF 2.0 (Colab's default), reset the runtime.
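To fail fast instead of hitting cryptic attribute errors mid-run, you can guard on the major version right after the import. A small sketch (the version strings in the comments are just examples):

```python
def is_tf1(version_string):
    """Return True when a TensorFlow version string belongs to the 1.x line."""
    major = int(version_string.split(".")[0])
    return major == 1

# In the notebook, after `%tensorflow_version 1.x` and `import tensorflow`:
# assert is_tf1(tensorflow.__version__), "Restart the runtime: still on TF 2.x"
```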

jvel07 avatar Apr 08 '20 17:04 jvel07

Thank you! I got the training to work.

Tylersuard avatar Apr 08 '20 23:04 Tylersuard

On my side, the training executes after installing TensorFlow 1.15, but it soon crashes with a couple of "numpy module not found" errors. Could I see your running Colab session?


moih avatar Apr 09 '20 07:04 moih

@moih Absolutely! https://colab.research.google.com/drive/1N0CtpO6VZvcyE72r3eEkBQthQsUlm6Hf

Tylersuard avatar Apr 09 '20 07:04 Tylersuard

@Tylersuard thanks, it's working for me now! Just curious, how are you managing to download the checkpoints?

moih avatar Apr 09 '20 08:04 moih

I'm downloading the checkpoints manually, which is probably not the best way. There's a command in the docs for how to save them automatically though.
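For anyone doing the same manual step, it amounts to copying the few files TensorFlow writes per checkpoint step out of the train directory. A minimal sketch of the idea (the train/backup paths are placeholders, and the repo's own backup script is the more complete option):

```python
import glob
import os
import shutil

def backup_checkpoint(train_dir, backup_dir, step):
    """Copy the .index/.meta/.data-* files for one checkpoint step."""
    os.makedirs(backup_dir, exist_ok=True)
    pattern = os.path.join(train_dir, "model.ckpt-%d.*" % step)
    copied = []
    for path in glob.glob(pattern):
        shutil.copy(path, backup_dir)
        copied.append(os.path.basename(path))
    return sorted(copied)
```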

Tylersuard avatar Apr 09 '20 14:04 Tylersuard

Here's a modified version of your notebook that saves checkpoints directly to your Google Drive: https://colab.research.google.com/drive/1oZBt78G2TSnRq4IY194dW3IfuVYNsT-B

moih avatar Apr 10 '20 13:04 moih

Thank you sir!

Tylersuard avatar Apr 10 '20 18:04 Tylersuard

@moih were you able to get generation to work after training the model?

Tylersuard avatar Apr 14 '20 03:04 Tylersuard

Hi, only when I download the whole results folder and run generation directly on my computer, not with the generator code provided as an example... let me know if you manage to do it yourself.


moih avatar Apr 14 '20 08:04 moih

Hey! Figured I'd throw my two cents in: I've managed to successfully train and generate samples using Google Colab. A couple of things I found out along the way:

  • Set everything up in Google Drive; that way, if the Colab session times out for one of many reasons, your checkpoint files are still saved.
  • Depending on how much storage your Drive account has (if you only have the default free 15 GB, for example), note that the "deletion" of checkpoint files as training progresses does not actually delete the files from your account. It moves them to your trash folder, which still counts against your Drive storage limit. As far as I could find, there's no way to change or disable this behavior, so my solution (hacky as it might be) was to have a second Colab session regularly deleting files from the trash using PyDrive.
  • On the same note, if you do run out of Google Drive space, things in my experience fail silently: the checkpoint files themselves are not saved to your Drive until you clear the trash. I've lost checkpoints due to this, so be mindful of it.
  • Generation worked more or less with the exact example generation code in the README (making sure to use %tensorflow_version 1.x). The only modification needed is changing the name of the checkpoint you're restoring, for example: saver.restore(sess, 'drive/My Drive/colab/wavegan/train/model.ckpt-XXXX'), where XXXX is the number of the checkpoint you're attempting to load.
  • You can also run the provided backup script in another Colab instance while connected to your Drive.

Hope this helps!
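To avoid typing the XXXX by hand, one option is to parse the highest step number out of the train directory. A sketch, assuming the standard model.ckpt-&lt;step&gt; naming described above:

```python
import glob
import os
import re

def latest_checkpoint_step(train_dir):
    """Return the largest <step> among model.ckpt-<step>.* files, or None."""
    steps = set()
    for path in glob.glob(os.path.join(train_dir, "model.ckpt-*")):
        m = re.match(r"model\.ckpt-(\d+)", os.path.basename(path))
        if m:
            steps.add(int(m.group(1)))
    return max(steps) if steps else None

# step = latest_checkpoint_step('drive/My Drive/colab/wavegan/train')
# saver.restore(sess, 'drive/My Drive/colab/wavegan/train/model.ckpt-%d' % step)
```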

ben-hawks avatar May 14 '20 05:05 ben-hawks


This was helpful for sure! For anyone else who runs into this: if you use the Colab notebook from the README (as opposed to the snippet provided to produce a single clip), one thing I ran into was there being no tensor called 'G_z_spec:0'. However, if you compare with the logic from the README, you can see that the G_z_spec part is not needed at all (I'm guessing G_z_spec might be for the spectral GAN). Simply comment out the G_z_spec line, change _G_z, _G_z_spec = sess.run([G_z, G_z_spec], {z: _z}) to _G_z = sess.run(G_z, {z: _z}), comment out display(PIL.Image.fromarray(_G_z_spec[i])), and it works perfectly :)

mikemech83 avatar Jan 31 '21 15:01 mikemech83


Still can't get the generation running. Would you mind sharing a notebook? :)

pryda-snare avatar Aug 04 '21 20:08 pryda-snare