Grayscale conversion
Is there a straightforward way to adapt the model to grayscale (one-channel) images?
Hello,
you can certainly adapt the model to take more or fewer than 3 image channels as input. I just added a few lines to model/networks.py to make this easier. A generator that takes one-channel images can be created like this:
from model.networks import Generator

# one image channel + mask channel + "ones" channel -> 3 input channels
generator = Generator(cnum_in=3, cnum_out=1)
I also added a few lines to utils/data.py so that images are converted to grayscale during training if the number of channels in img_shapes is set to 1 in a config.yaml file:
img_shapes: [256, 256, 1]
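To illustrate why cnum_in=3 is still correct for one-channel images, here is a minimal sketch of how the three input channels could be assembled. This uses numpy for illustration only (the actual pipeline works with PyTorch tensors), and the array shapes and value ranges are assumptions, not taken from the repo:

```python
import numpy as np

H, W = 256, 256

# hypothetical one-channel grayscale image in [-1, 1] and a binary hole mask
gray = np.random.uniform(-1.0, 1.0, size=(1, H, W)).astype(np.float32)
mask = (np.random.rand(1, H, W) > 0.5).astype(np.float32)

# zero out the masked region, then stack image + mask + "ones" -> 3 channels
masked_gray = gray * (1.0 - mask)
ones = np.ones((1, H, W), dtype=np.float32)
x = np.concatenate([masked_gray, mask, ones], axis=0)

print(x.shape)  # (3, 256, 256), matching cnum_in=3
```

The generator then produces a single output channel (cnum_out=1) for the reconstructed grayscale image.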
Hi!
Thanks a lot for your help!
I may have other questions; should I open new issues for them?
Hi, if you have other questions you can write them here.
Hello!
Thanks once again for your help.
I would like to ask: how should the discriminator loss behave in a successful training run? So far, it seems to always stick at 1. Do you have any idea why this is the case?
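One possible explanation, assuming the discriminator is trained with a hinge loss as in SN-PatchGAN-style inpainting models (an assumption about this repo, and whether the real/fake terms are averaged or summed depends on the implementation): if the discriminator's raw outputs hover near zero on both real and fake samples, the averaged hinge loss evaluates to exactly 1. A small sketch, using numpy for illustration:

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    """Hinge loss for the discriminator, averaging the real and fake terms."""
    loss_real = np.mean(np.maximum(1.0 - d_real, 0.0))  # push real outputs above +1
    loss_fake = np.mean(np.maximum(1.0 + d_fake, 0.0))  # push fake outputs below -1
    return 0.5 * (loss_real + loss_fake)

# if D cannot separate real from fake, its outputs collapse toward 0
d_real = np.zeros(8)
d_fake = np.zeros(8)
print(d_hinge_loss(d_real, d_fake))  # 1.0
```

So a loss pinned at 1 may simply mean the discriminator outputs stay near zero, i.e. it is not (yet) separating real from generated samples; it does not by itself indicate a broken setup.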