taming-transformers
'VQModel' object has no attribute 'be_unconditional'
Following @rom1504's instructions for a custom dataset. I have no classes and have trained a model that is producing reconstructions. When attempting to sample by running sample_fast.py, I receive this error:
```
Working with z of shape (1, 256, 16, 16) = 65536 dimensions.
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
VQLPIPSWithDiscriminator running with hinge loss.
Logging to logs/2021-06-25T12-27-06_custom_vqgan/samples/top_k_250_temp_1.00_top_p_1.0/234637
Traceback (most recent call last):
  File "scripts/sample_fast.py", line 262, in <module>
    run(logdir, model, opt.batch_size, opt.temperature, opt.top_k, unconditional=model.be_unconditional,
  File "/home/virginia/dalle/dall2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 778, in __getattr__
    raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'VQModel' object has no attribute 'be_unconditional'
```
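For context, torch.nn.Module raises this error whenever an attribute was never assigned on the module: VQModel in vqgan.py never sets be_unconditional, while Net2NetTransformer in cond_transformer.py does. Here is a minimal sketch with hypothetical stand-in classes (not the real taming-transformers models) that mimics the lookup:

```python
import torch.nn as nn

# Hypothetical stand-ins, NOT the real taming-transformers classes;
# they only mimic which attributes each model type assigns.
class VQModelLike(nn.Module):             # like taming.models.vqgan.VQModel
    pass                                  # never assigns be_unconditional

class Net2NetTransformerLike(nn.Module):  # like Net2NetTransformer
    def __init__(self):
        super().__init__()
        self.be_unconditional = True      # the flag sample_fast.py reads

print(hasattr(VQModelLike(), "be_unconditional"))             # False -> AttributeError on access
print(hasattr(Net2NetTransformerLike(), "be_unconditional"))  # True
```

So sample_fast.py is reading a flag that only the transformer wrapper defines; pointing it at a bare VQGAN checkpoint cannot work.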
It seems like I got the same issue!
I have trained a custom VQGAN model on a custom dataset using the config custom_vqgan.yaml, with only training_images_list_file and test_images_list_file modified, and the reconstructions are good. But when sampling with any of the scripts make_samples.py, sample_conditional.py, or sample_fast.py, given the logs path as described in the README, there are errors! When I dug in, I found that these errors occur because the attributes the scripts rely on are not defined on VQModel in vqgan.py, while they are defined on Net2NetTransformer in cond_transformer.py.

My VQGAN model was trained, following @rom1504's instructions, with:
python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,1
- When running
python sample_conditional.py -r ../logs/2021-06-28T22-32-47_custom_vqgan/ --outdir ../results
I got this error:
```
Working with z of shape (1, 256, 16, 16) = 65536 dimensions.
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
VQLPIPSWithDiscriminator running with hinge loss.
Traceback (most recent call last):
  File "/home/michael/vqgan-clip/taming-transformers/scripts/sample_conditional.py", line 355, in <module>
    run_conditional(model, dsets)
  File "/home/michael/.conda/envs/vqgan/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/michael/vqgan-clip/taming-transformers/scripts/sample_conditional.py", line 70, in run_conditional
    x = model.get_input("image", example).to(model.device)
  File "/home/michael/vqgan-clip/taming-transformers/taming/models/vqgan.py", line 77, in get_input
    x = batch[k]
TypeError: string indices must be integers
```
It turns out that get_input of the class VQModel(pl.LightningModule) is defined as

```python
def get_input(self, batch, k):
    # batch is expected to be a dict-like object and k the key to read
    x = batch[k]
    if len(x.shape) == 3:
        x = x[..., None]
    x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format)
    return x.float()
```

so the arguments "image", example are passed in the wrong order for this signature; the call would only match a signature like def get_input(self, k, batch). The same call is correct, however, for the Net2NetTransformer(pl.LightningModule) class in cond_transformer.py.
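The TypeError itself can be reproduced in isolation: with the swapped order, VQModel.get_input receives the string "image" as batch and the example dict as k, and Python refuses to index a string with a dict. A minimal sketch:

```python
# Minimal reproduction of the error above: inside VQModel.get_input,
# `batch` ends up being the string "image" and `k` the example dict.
batch, k = "image", {"image": "a tensor in the real script"}
try:
    x = batch[k]  # indexing a str with a dict
except TypeError as e:
    print(e)      # -> string indices must be integers
```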
- When running
python make_samples.py -r ../logs/2021-06-28T22-32-47_custom_vqgan/ --outdir ../results
I got a similar error:
```
Working with z of shape (1, 256, 16, 16) = 65536 dimensions.
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
VQLPIPSWithDiscriminator running with hinge loss.
Missing Keys in State Dict: []
Unexpected Keys in State Dict: []
Global step: 107168
Writing samples to ../results/107168_100_1.0
Dataset: CustomTrain
  0%|          | 0/12960 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/michael/vqgan-clip/taming-transformers/scripts/make_samples.py", line 292, in <module>
    run_conditional(model, dsets, outdir, opt.top_k, opt.temperature)
  File "/home/michael/.conda/envs/vqgan/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/michael/vqgan-clip/taming-transformers/scripts/make_samples.py", line 31, in run_conditional
    x = model.get_input("image", example).to(model.device)
  File "/home/michael/vqgan-clip/taming-transformers/taming/models/vqgan.py", line 77, in get_input
    x = batch[k]
TypeError: string indices must be integers
```
- When running
python sample_fast.py -r ../logs/2021-06-28T22-32-47_custom_vqgan/ --outdir ../results
I got the same error as @ChristianFJung.
So how can this be fixed? Does this mean we cannot set the model's target to taming.models.vqgan.VQModel in the config? Did I do something wrong?
Your help is much appreciated!
Hi, if you want to sample with this model you need to train a transformer model too, not only the VQGAN; there are some more details in this comment: https://github.com/CompVis/taming-transformers/pull/54#issuecomment-868682255. I think the errors you're getting are due to providing the wrong kind of model to the sampling scripts.
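For reference, here is a rough sketch of what that second-stage config could look like, modeled on the repo's faceshq_transformer.yaml. Everything below is illustrative: configs/custom_transformer.yaml is a hypothetical name, the transformer sizes are placeholders, ddconfig must be copied from the trained VQGAN's config, and ckpt_path must point at its checkpoint. The cond_stage_config: __is_unconditional__ setting is what makes Net2NetTransformer set be_unconditional for data without classes:

```yaml
# Hypothetical configs/custom_transformer.yaml (a sketch, not a verified config):
model:
  base_learning_rate: 4.5e-06
  target: taming.models.cond_transformer.Net2NetTransformer
  params:
    cond_stage_config: __is_unconditional__  # no classes / no conditioning
    transformer_config:
      target: taming.modules.transformer.mingpt.GPT
      params:
        vocab_size: 1024   # should match the VQGAN's n_embed
        block_size: 512    # must cover the 16x16 = 256 latent tokens
        n_layer: 24        # placeholder capacity; tune for your budget
        n_head: 16
        n_embd: 1024
    first_stage_config:
      target: taming.models.vqgan.VQModel
      params:
        ckpt_path: logs/2021-06-28T22-32-47_custom_vqgan/checkpoints/last.ckpt
        embed_dim: 256
        n_embed: 1024
        ddconfig: ...      # copy verbatim from the trained VQGAN's config
        lossconfig:
          target: taming.modules.losses.DummyLoss
```

Train it with the same entry point (plus a data section like the one in custom_vqgan.yaml), e.g. python main.py --base configs/custom_transformer.yaml -t True --gpus 0,, and then point the sampling scripts at this transformer run's log directory instead of the VQGAN's.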
> Following @rom1504's instructions for a custom dataset. I have no classes and have trained a model that is producing reconstructions. When attempting to sample by running sample_fast.py, I am receiving this error: [...] 'VQModel' object has no attribute 'be_unconditional'
Hi, did you solve this problem? I have the same one and I don't understand how to fix it.
I am dealing with the same problem.