ExplainingAI


If you didn't change any parameters, then that means the autoencoder ran for only 20 epochs and the discriminator didn't even start, because the config has the start of discriminator...
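For reference, a minimal sketch of the config keys involved (key names follow the repo's `autoencoder_epochs` / `disc_start` keys discussed in these comments; the values here are purely illustrative):

```yaml
# Illustrative values only -- the point is that the discriminator
# never starts if training ends before the disc_start step is reached.
train_params:
  autoencoder_epochs: 20   # autoencoder trains for only 20 epochs
  disc_start: 15000        # discriminator loss kicks in at this global step
```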

Also, the LDM epochs are set at 100, but that was for the CelebHQ dataset with 30,000 images. I would suggest that in the current setting with 100 images, you should...

"but I just want to make sure like my dataset is simple it don't include type of variations I want like snow or dust" I didnt get this part. Could...

But if the model has never seen what 'snow' looks like at any point during training, it will not be able to generate 'snow on walls', right?

Yes, a pre-trained model would work because it has seen what 'snow' looks like. But this model will be trained from scratch, so I would suggest either using the pre-trained...

This part of the README is just saying that the dataset class must return a tuple of an image tensor and a dictionary of conditional inputs. And for the class-conditional case,...
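A minimal sketch of that contract, assuming a standard PyTorch `Dataset` (the class name, the `'class'` key, and the tensor shape are all hypothetical illustrations, not the repo's actual code):

```python
import torch
from torch.utils.data import Dataset


class ToyConditionalDataset(Dataset):
    """Hypothetical dataset illustrating the expected return format:
    a tuple of (image_tensor, dict_of_conditional_inputs)."""

    def __init__(self, num_items=8):
        self.num_items = num_items

    def __len__(self):
        return self.num_items

    def __getitem__(self, idx):
        image = torch.randn(3, 64, 64)     # placeholder image tensor (C, H, W)
        cond_inputs = {'class': idx % 4}   # hypothetical conditioning key
        return image, cond_inputs
```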

I think it would benefit from training the autoencoder more. Specifically, two changes: 1. autoencoder_epochs: 1000 2. disc_start: 200 x (number of steps in one epoch). Basically, train for longer...
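The two changes above, written out as a config fragment (key names as stated in the comment; the steps-per-epoch figure is a placeholder you would compute for your own dataset and batch size):

```yaml
train_params:
  autoencoder_epochs: 1000
  # If one epoch is, say, 50 steps, then disc_start = 200 * 50 = 10000.
  # Replace 50 with the actual number of steps per epoch for your run.
  disc_start: 10000
```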

Yes the ddpm_ckpt_text_cond_clip.pth is overwritten every time you run the training.
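If you want to keep the weights from an earlier run, one option is to copy the checkpoint aside before retraining; a sketch (the `touch` line only creates a stand-in file so the example is self-contained, and the backup filename is just an illustration):

```shell
# Stand-in for an existing checkpoint from a previous run
touch ddpm_ckpt_text_cond_clip.pth
# Back it up before rerunning training, which would overwrite it
cp ddpm_ckpt_text_cond_clip.pth ddpm_ckpt_text_cond_clip.backup.pth
```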

Hello @vavasthi, for generating larger-resolution images you would need significant compute at your disposal. But assuming you have that, the actual change is only in one key, which is...

If you haven't changed the batch size, then can you try reducing the batch sizes for both the autoencoder and the LDM using the following:

```
train_params:
  ldm_batch_size: 16 changed to...
```