tensorflow2-generative-models

Implementations of a number of generative models in TensorFlow 2: GAN, VAE, Seq2Seq, VAEGAN, GAIA, and spectrogram inversion. Everything is self-contained in a Jupyter notebook for easy export to Colab.

10 tensorflow2-generative-models issues

Hi there, I have tried running this code and I cannot get past the create-model step; I've pasted the error below. Please let me know if you need more...

I have seen some errors in the [**8.0-NSYNTH-iterator.ipynb**](https://colab.research.google.com/github/timsainb/tensorflow2-generative-models/blob/master/8.0-NSYNTH-iterator.ipynb) file, which I guess were caused by changes to the Python modules from upgrades. **Warning:** I have changed the...

Hi, I just found that some required packages are not included in `requirement.txt`. Could you kindly provide an updated one? Thanks.

In the discriminator's network architecture: `tf.keras.layers.Dense(units=1, activation="sigmoid"),` — you don't need any activation function on the discriminator's output, since you are using the Wasserstein loss. Using a sigmoid here would greatly limit convergence...
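For reference, a minimal sketch of a Wasserstein critic head with a linear (no-activation) output, as the comment suggests. The layer sizes and input shape here are illustrative, not the notebook's actual architecture:

```python
import tensorflow as tf

# Sketch of a WGAN critic: the final Dense layer has no activation, so the
# critic outputs an unbounded real-valued score rather than a probability.
# Layer sizes and input shape are illustrative.
critic = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(units=128, activation="relu"),
    tf.keras.layers.Dense(units=1),  # linear output for the Wasserstein loss
])

def wasserstein_critic_loss(real_scores, fake_scores):
    # The critic maximizes E[D(x_real)] - E[D(x_fake)]; we minimize the negation.
    return tf.reduce_mean(fake_scores) - tf.reduce_mean(real_scores)
```

A sigmoid would squash the critic's scores into (0, 1), defeating the point of the unbounded score that the Wasserstein formulation relies on.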

I am getting this error at line 21 (`model.train(example_data)`). Is anyone else facing the same issue? ![image](https://user-images.githubusercontent.com/23742551/77819875-4afb1c00-7104-11ea-8c51-f9a5087636be.png)

The following is the discriminator's loss in the GAIA notebook. Shouldn't `d_xg_loss` be negative? `disc_loss = d_xg_loss + d_x_loss - tf.clip_by_value(d_xi_loss, 0, d_x_loss)`
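For context, `tf.clip_by_value` bounds the interpolation term so it never subtracts more than the real-data loss. A small sketch of how that line evaluates; the scalar values are placeholders standing in for the notebook's actual loss tensors:

```python
import tensorflow as tf

# Placeholder scalar losses standing in for the notebook's values.
d_x_loss = tf.constant(0.5)   # discriminator loss on real data
d_xg_loss = tf.constant(0.8)  # loss on generated samples
d_xi_loss = tf.constant(1.2)  # loss on interpolated samples

# The line in question: the interpolation term is clipped to [0, d_x_loss],
# so it can subtract at most the real-data loss.
disc_loss = d_xg_loss + d_x_loss - tf.clip_by_value(d_xi_loss, 0.0, d_x_loss)
# With the values above, clip_by_value returns 0.5, so disc_loss = 0.8.
```

Whether `d_xg_loss` should enter with a negative sign is exactly the question the issue raises; the sketch only illustrates what the current line computes.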

In computing the testing loss for the autoencoder, you reuse the training set:

```python
# test on holdout
loss = []
for batch, test_x in tqdm(
    zip(range(N_TRAIN_BATCHES), train_dataset), total=N_TRAIN_BATCHES
    # ...
```
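A sketch of the fix the comment implies: iterate the holdout batches rather than the training set. `test_dataset`, `N_TEST_BATCHES`, and `compute_loss` are hypothetical stand-ins for the notebook's actual holdout variables and loss function:

```python
import tensorflow as tf

# Hypothetical holdout data standing in for the notebook's test split.
N_TEST_BATCHES = 4
test_dataset = tf.data.Dataset.from_tensor_slices(
    tf.random.normal((N_TEST_BATCHES * 8, 2))
).batch(8)

def compute_loss(x):
    # Stand-in for the model's per-batch reconstruction loss.
    return tf.reduce_mean(tf.square(x))

# test on holdout: zip against the *test* dataset and test batch count
loss = []
for batch, test_x in zip(range(N_TEST_BATCHES), test_dataset):
    loss.append(compute_loss(test_x))
mean_test_loss = tf.reduce_mean(loss)
```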

This isn't an issue, but I was wondering if you could implement the InfoGAN model in TensorFlow 2.0 as well.