
Class_id error

Open yejg2017 opened this issue 5 years ago • 4 comments

Thanks for sharing your code. I always get the error: `TypeError: forward() missing 1 required positional argument: 'class_id'`. Could you tell me the solution?

yejg2017 avatar Dec 03 '18 09:12 yejg2017

Could you paste the error messages?

sxhxliang avatar Dec 04 '18 08:12 sxhxliang

I get this whenever I use the WGAN-GP loss (which forced me to fall back to the hinge loss):

Traceback (most recent call last):
  File "main.py", line 45, in <module>
    main(config)
  File "main.py", line 37, in main
    trainer.train()
  File "/home/gwern/src/BigGAN-pytorch-2/trainer.py", line 150, in train
    out = self.D(interpolated)
  File "/home/gwern/bin/miniconda2/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gwern/bin/miniconda2/envs/fastai/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 123, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/gwern/bin/miniconda2/envs/fastai/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 133, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/gwern/bin/miniconda2/envs/fastai/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
    raise output
  File "/home/gwern/bin/miniconda2/envs/fastai/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 53, in _worker
    output = module(*input, **kwargs)
  File "/home/gwern/bin/miniconda2/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'class_id'

As far as I can tell, this is due to a minor typo in the WGAN-GP section of trainer.py:

@@ -147,7 +147,7 @@ class Trainer(object):
                 # Compute gradient penalty
                 alpha = torch.rand(real_images.size(0), 1, 1, 1).to(self.device).expand_as(real_images)
                 interpolated = Variable(alpha * real_images.data + (1 - alpha) * fake_images.data, requires_grad=True)
-                out = self.D(interpolated)
+                out = self.D(interpolated, z_class)

`D(interpolated)` can't be right, because it's missing the class argument entirely. Every other call to the discriminator passes the class labels, so this looks like a simple one-line fix.
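For what it's worth, the mechanism behind the traceback can be reproduced without PyTorch at all: any `forward()` that declares two positional parameters raises exactly this `TypeError` when called with only one. A minimal stand-alone sketch (the `CondD` class below is a hypothetical stand-in for the repo's conditional discriminator, not actual code from it):

```python
class CondD:
    """Stand-in for a conditional discriminator whose forward()
    requires both the input and a class_id, mirroring the
    signature that trainer.py's gradient-penalty code must satisfy."""
    def forward(self, x, class_id):
        return (x, class_id)

d = CondD()

# Passing both arguments works, as in the patched trainer.py line:
out = d.forward("interpolated", "z_class")
assert out == ("interpolated", "z_class")

# Omitting class_id reproduces the reported error:
try:
    d.forward("interpolated")
except TypeError as e:
    print(e)  # message complains about the missing 'class_id' argument
```

With `nn.DataParallel` in the mix, the only wrinkle is that the exception is raised inside a worker and re-raised by `parallel_apply`, which is why the traceback routes through `data_parallel.py` before landing on the missing-argument error.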

I began retraining my existing hinge-loss-based faces BigGAN with this fix a few minutes ago. We'll see if it works.

gwern avatar Dec 12 '18 14:12 gwern