
Results 8 comments of Mr.Xun

Please check your execute command and config file. This is my `yaml` config file:

```yaml
model:
  base_learning_rate: 2.0e-06
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.0015
    linear_end: 0.0195
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: ...
```

> Hello, I've checked my execute command and config file, which is the same as yours, but still can't solve this problem. Could you give any more solutions?

Hi, give me...

Refer to this [code block](https://github.com/CompVis/latent-diffusion/blob/main/scripts/inpaint.py#L76-L87).

From my perspective, the `image_size` of the target `ldm.models.diffusion.ddpm.LatentDiffusion` is the latent's size, not the real image size.
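To make the relation concrete, here is a minimal sketch, assuming the usual setup where the first-stage autoencoder downsamples each spatial dimension by a factor `f` (the factor itself is an assumption, not stated in this thread):

```python
# Sketch: in latent-diffusion configs, the UNet's `image_size` refers to
# the latent grid, not the pixel image. For an autoencoder with spatial
# downsampling factor f, latent_size = pixel_size // f.
def latent_size(pixel_size: int, f: int) -> int:
    """Spatial size of the latent for a square image of `pixel_size` pixels."""
    assert pixel_size % f == 0, "pixel size must be divisible by f"
    return pixel_size // f

print(latent_size(256, 4))  # -> 64: a 256x256 image with f=4 gives a 64x64 latent
```

So if your pixel images are 256x256 and the autoencoder is an f=4 model, `image_size` in the diffusion config should be 64.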

Hi, please check your `pytorch-lightning` version.

You should train a new **autoencoder** to encode the source image `128x128x3` into a latent code `32x32x4`.
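A first-stage config for such an autoencoder might look like the sketch below, modeled on the repo's `AutoencoderKL` configs; the learning rate, loss parameters, and `ch_mult` here are assumptions, not values from this thread. With three resolution levels (`ch_mult: [1, 2, 4]`) the encoder downsamples by a factor of 4, so a 128x128 input yields a 32x32 latent, and `embed_dim: 4` gives the 4 latent channels:

```yaml
model:
  base_learning_rate: 4.5e-06
  target: ldm.models.autoencoder.AutoencoderKL
  params:
    monitor: val/rec_loss
    embed_dim: 4            # number of latent channels -> 32x32x4
    lossconfig:
      target: ldm.modules.losses.LPIPSWithDiscriminator
      params:
        disc_start: 50001
        kl_weight: 1.0e-06
        disc_weight: 0.5
    ddconfig:
      double_z: true
      z_channels: 4
      resolution: 128       # pixel resolution of the source images
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult: [1, 2, 4]    # three levels -> downsampling factor f = 4
      num_res_blocks: 2
      attn_resolutions: []
      dropout: 0.0
```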

> > You should train a new **autoencoder** to encode the source image `128x128x3` into a latent code `32x32x4`.
>
> Could you tell me how to train? Thanks

That...

Your Dataset class is wrong: each sample it returns should contain the key `caption`, so that the key is present in the `batch` dict.
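A minimal sketch of such a dataset is below. The key names `image` and `caption` follow the text-conditioning convention referenced above; the class name and constructor arguments are placeholders, and it is written as a plain class so it runs without `torch` (in practice you would subclass `torch.utils.data.Dataset`):

```python
# Sketch: every sample must be a dict containing the conditioning key
# (`caption` here) alongside the image, so the collated batch exposes
# batch["caption"] to the conditioning stage.
class TextImageDataset:
    def __init__(self, images, captions):
        assert len(images) == len(captions), "one caption per image"
        self.images = images
        self.captions = captions

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        # Both keys must be present in every returned sample.
        return {"image": self.images[i], "caption": self.captions[i]}

sample = TextImageDataset([[0.0]], ["a photo of a cat"])[0]
print(sorted(sample))  # -> ['caption', 'image']
```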