
Grayscale conversion

Open glagnese opened this issue 2 years ago • 4 comments

Is there a straightforward way to adapt the model to grayscales (one channel) images?

glagnese avatar Jun 18 '23 14:06 glagnese

Hello, you can certainly adapt the model to take more or fewer than 3 image channels as input. I just added a few lines to model/networks.py to make this easier. A generator that takes one-channel images can be created like this:

# one image channel + mask channel + "ones" channel -> 3 input channels
generator = Generator(cnum_in=3, cnum_out=1)

I also added a few lines to utils/data.py so that images are converted to grayscale during training if the number of channels in img_shapes is set to 1 in the config.yaml file.

img_shapes: [256, 256, 1]

nipponjo avatar Jun 18 '23 15:06 nipponjo
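To illustrate the channel layout mentioned in the comment above ("one image channel + mask channel + 'ones' channel"), here is a hedged sketch of how such a 3-channel generator input could be assembled from a one-channel image and a binary mask. It uses NumPy purely for illustration; the function name and the exact concatenation order are assumptions, not the repo's actual code in model/networks.py.

```python
import numpy as np

def build_generator_input(img, mask):
    """Hypothetical helper: stack masked image, ones, and mask channels.

    img:  (H, W) grayscale image, values in [-1, 1]
    mask: (H, W) binary mask, 1 = hole to inpaint
    Returns a (3, H, W) array matching cnum_in=3.
    """
    masked = img * (1.0 - mask)      # zero out the hole region
    ones = np.ones_like(img)         # constant "ones" channel
    return np.stack([masked, ones, mask], axis=0)

x = build_generator_input(np.zeros((256, 256)), np.zeros((256, 256)))
print(x.shape)  # (3, 256, 256)
```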

Hi!

Thanks a lot for your help!

I may have other questions, should I open other issues?

glagnese avatar Jun 19 '23 09:06 glagnese

Hi, if you have other questions you can write them here.

nipponjo avatar Jun 19 '23 19:06 nipponjo

Hello!

Thanks once again for your help.

I would like to ask: how should I expect the discriminator loss to behave in a successful training run? So far, it seems to stick at 1. Do you have any idea why this is the case?

G

glagnese avatar Jun 29 '23 07:06 glagnese
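One hedged observation on the question above: assuming a discriminator hinge loss of the kind commonly used with SN-PatchGAN (averaging the real and fake terms with a factor of 0.5 each; the exact formulation in this repo may differ), a loss that sits at exactly 1 is what you get when the discriminator outputs values near zero for both real and fake patches, i.e. it cannot separate them at all:

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    """Sketch of an averaged SN-PatchGAN discriminator hinge loss:
    0.5 * mean(relu(1 - D(real))) + 0.5 * mean(relu(1 + D(fake)))."""
    relu = lambda v: np.maximum(v, 0.0)
    return 0.5 * relu(1.0 - d_real).mean() + 0.5 * relu(1.0 + d_fake).mean()

# If D outputs ~0 everywhere, each term contributes 0.5:
print(d_hinge_loss(np.zeros(64), np.zeros(64)))  # 1.0
```

Under this reading, a loss pinned at 1 suggests the discriminator outputs are collapsing toward zero rather than the training having failed outright; checking the raw D outputs on real and fake batches would confirm it.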