SerdarHELLI
Yeah :) I don't know how you're going to do it, but it's okay.
@tilakrayal Thanks :D But I got your response after a year, so I've forgotten my issue.
> Hey @zhihongp, thanks for catching this! I have just added the VQGAN loss in [f13bf9b](https://github.com/CompVis/latent-diffusion/commit/f13bf9bf463d95b5a16aeadd2b02abde31f769f8). It is the same as in the taming-transformers repo, but provides some additional information...
You can try deleting `predicted_indices` in autoencoder.py. I did that here: https://github.com/SerdarHelli/latent-diffusion/blob/main/ldm/models/autoencoder.py . Then you will probably get an error about `version` being undefined; I just deleted that too. :D...
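As a sketch of the general idea (not the actual repo code): instead of deleting `predicted_indices` at every call site in `ldm/models/autoencoder.py`, you could wrap the loss call with a small compatibility helper that drops keyword arguments the installed loss doesn't accept. `call_compat` and `legacy_loss` below are hypothetical names for illustration; they only mimic the shape of the mismatch between the latent-diffusion call sites and an older taming-transformers loss.

```python
import inspect

def call_compat(fn, *args, **kwargs):
    """Call fn, silently dropping keyword args its signature doesn't accept
    (e.g. predicted_indices when the installed loss predates that parameter)."""
    params = inspect.signature(fn).parameters
    # If fn takes **kwargs it accepts anything; otherwise filter the extras.
    if not any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **kwargs)

def legacy_loss(qloss, x, xrec, split="train"):
    # Stand-in for an older loss that has no predicted_indices parameter.
    return qloss + (x - xrec) ** 2

# The extra predicted_indices kwarg is dropped instead of raising a TypeError.
out = call_compat(legacy_loss, 0.1, 1.0, 0.5, split="train", predicted_indices=None)
print(out)  # 0.35
```

Editing the call sites directly, as in the linked fork, is the simpler fix; a wrapper like this just avoids touching every call.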
@otamic Wow, that's nice. Can you share your dataloader code? I want to be sure about something before I write my own :D
Yes, thanks @otamic. https://github.com/CompVis/taming-transformers/blob/master/taming/data/sflckr.py is actually the one I was searching for. I know they mentioned it, but I hadn't checked it out :D
> @otamic I have trained semantic synthesis 255 on cityscapes with the same config you have shared, but I'm getting this image as a result, do...
> @mmash98
> > Could you try a smaller batch size, such as 4? If it can't help, I have no other idea.

I think so. He didn't train enough....