Hao

Results 146 comments of Hao

Three solutions:
1. find a machine with more memory,
2. use a smaller batch size, or
3. reimplement the data loading part with the TensorFlow Dataset API.

Hope it helps.
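As a rough illustration of option 3, a lazy batch generator keeps only one batch in memory at a time; with TensorFlow installed you can wrap it with `tf.data.Dataset.from_generator` to get prefetching and parallel loading. This is a generic sketch, not the repo's code, and `load_fn` is a hypothetical per-file loader:

```python
def batch_generator(file_paths, batch_size, load_fn):
    """Yield batches lazily so only one batch's worth of data
    is held in memory at a time. With TensorFlow, wrap this in
    tf.data.Dataset.from_generator for prefetching."""
    batch = []
    for path in file_paths:
        batch.append(load_fn(path))  # load one sample at a time
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly smaller, batch
        yield batch
```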

This paper can answer your question: https://arxiv.org/pdf/1705.03820.pdf

It is GAN mode collapse; the default dataset and training settings in this repo should be fine.

You need to understand GAN-CLS before reading this code.
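For reference, the core of GAN-CLS (Reed et al., 2016) is a matching-aware discriminator loss: only {real image, matching text} counts as real, while both {real image, wrong text} and {fake image, matching text} count as fake. A minimal NumPy sketch of that loss (not the repo's actual code) looks like:

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on sigmoid outputs in (0, 1)."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def d_loss_gan_cls(d_real_match, d_real_mismatch, d_fake_match):
    """GAN-CLS discriminator loss: the two 'fake' terms
    (wrong text, fake image) are averaged, as in the paper."""
    return (bce(d_real_match, 1.0)
            + 0.5 * (bce(d_real_mismatch, 0.0) + bce(d_fake_match, 0.0)))
```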

Please report your TF and TL versions.
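A quick way to gather those versions — a generic helper that also copes with a package not being installed:

```python
import importlib

def report_version(module_name):
    """Return the module's __version__, or a note if unavailable."""
    try:
        mod = importlib.import_module(module_name)
        return getattr(mod, "__version__", "unknown")
    except ImportError:
        return "not installed"

# Print the versions relevant to this repo.
for name in ("tensorflow", "tensorlayer"):
    print(name, report_version(name))
```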

It seems the npz does not match your network architecture.
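To see where the mismatch is, you can list the array names and shapes stored in the npz and compare them against your model's parameter shapes. A small helper sketch (not from the repo):

```python
import numpy as np

def inspect_npz(path):
    """Map each saved array's name to its shape so you can
    diff them against your network's parameter shapes."""
    with np.load(path) as data:
        return {name: data[name].shape for name in data.files}

# Example: print name -> shape for every saved tensor.
# for name, shape in inspect_npz("model.npz").items():
#     print(name, shape)
```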

I didn't provide a pre-trained model for this repo.

If you trained the model with my code and used the same code to load the npz, but it still fails, I don't have any idea about it ... maybe you...

Yes, we did. To train on a different dataset, simply write your own `data_loader.py`. Here is an example of a data loader for the bird dataset. Enjoy: ``` if xxx ... elif...
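Since the snippet above is truncated, here is a rough sketch of what a custom `data_loader.py` might look like, assuming a directory of images plus a text file with one caption per image. The file layout and function names are hypothetical, not the repo's:

```python
import os

def load_captions(caption_file):
    """Read one caption per line; returns a list of strings."""
    with open(caption_file, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def load_dataset(image_dir, caption_file):
    """Pair image paths with captions by sorted filename order."""
    images = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith((".jpg", ".png"))
    )
    captions = load_captions(caption_file)
    assert len(images) == len(captions), "one caption per image expected"
    return list(zip(images, captions))
```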

This method is quite old; there should be newer methods designed specifically for sign language datasets.