edge-connect

question about my own images

Open huangyuyu0426 opened this issue 6 years ago • 21 comments

Hello! I have some questions. When I use your examples, I can achieve the same effect, but when I use my own images there are some problems. Can you tell me how I can get the corresponding mask images?

huangyuyu0426 avatar Feb 01 '19 07:02 huangyuyu0426

@huangyuyu0426 Basically, you can create the masks in image editing software if you want them to be precise, i.e. when removing an object from an image. In that case, you have to make sure that the mask covers the entire object in the image. We use Adobe Photoshop for that. If you only need to test the model with some random mask, then you can either create them in Python or use the referenced datasets. But please note that we use binary masks in our model; that means you need to save mask images in .png format, as I have explained here: https://github.com/knazeri/edge-connect/issues/37#issuecomment-458182472
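For the "create them in Python" route, a minimal sketch could look like the following (the helper name and the rectangle-shaped hole are my own illustration, not code from this repo — the repo's referenced mask datasets use irregular shapes):

```python
import numpy as np

def make_random_mask(height, width, box_frac=0.25, seed=None):
    # Binary mask: 255 marks the missing region, 0 the known pixels.
    rng = np.random.default_rng(seed)
    mask = np.zeros((height, width), dtype=np.uint8)
    bh = max(1, int(height * box_frac))
    bw = max(1, int(width * box_frac))
    top = int(rng.integers(0, height - bh + 1))
    left = int(rng.integers(0, width - bw + 1))
    mask[top:top + bh, left:left + bw] = 255
    return mask

mask = make_random_mask(256, 256, seed=0)
# Save as a single-channel PNG, e.g. cv2.imwrite('mask.png', mask),
# so the loader sees a binary one-channel image rather than RGBA.
```

Writing the array out with a single-channel writer is the important part; many editors silently save PNGs with an alpha channel, which is exactly the problem discussed below.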

knazeri avatar Feb 02 '19 18:02 knazeri

Thanks a lot. I followed your advice. However, there is another question.

image, mask image, error image (screenshots attached). I cannot understand it.

huangyuyu0426 avatar Feb 12 '19 06:02 huangyuyu0426

I found that your test images' shape is (256L, 256L, 3L), so I changed my own images to the same shape. But there is another problem:

Traceback (most recent call last):
  File "test.py", line 2, in <module>
    main(mode=2)
  File "/home/huangyuyu/edge-connect/main.py", line 61, in main
    model.test()
  File "/home/huangyuyu/edge-connect/src/edge_connect.py", line 339, in test
    outputs = self.inpaint_model(images, edges, masks)
  File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/huangyuyu/edge-connect/src/models.py", line 255, in forward
    outputs = self.generator(inputs)  # in: [rgb(3) + edge(1)]
  File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/huangyuyu/edge-connect/src/networks.py", line 81, in forward
    x = self.encoder(x)
  File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 4, 7, 7], expected input[1, 5, 262, 262] to have 4 channels, but got 5 channels instead

Could you tell me how I can solve this problem, or how I can make my own images and masks?

huangyuyu0426 avatar Feb 13 '19 08:02 huangyuyu0426

@huangyuyu0426 Your mask image needs to be a binary image. Your mask image has four channels instead of one! You can test it using:

from scipy.misc import imread
img = imread('img.png')
print(img.shape) # (218, 178, 4)

I might change the code so that if an image is not a binary mask we get either a meaningful error or just load the first channel! Right now, you should save your masks in a binary format!
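That "load the first channel" workaround can be sketched in a few lines (the helper name is illustrative, not part of the repo):

```python
import numpy as np

def to_binary_mask(img, threshold=127):
    # Accept (H, W), (H, W, 3) or (H, W, 4) arrays and return a
    # single-channel binary mask with values 0 / 255 only.
    if img.ndim == 3:
        img = img[..., 0]  # keep the first channel, drop RGB/alpha extras
    return np.where(img > threshold, 255, 0).astype(np.uint8)

# A fake 4-channel RGBA mask with a white stripe in the first channel:
rgba = np.zeros((4, 4, 4), dtype=np.uint8)
rgba[:2, :, 0] = 255
mask = to_binary_mask(rgba)
print(mask.shape)  # (4, 4)
```

Re-saving the result as a PNG gives a mask the model can consume directly.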

knazeri avatar Feb 13 '19 17:02 knazeri

@knazeri I didn't change my mask images. I used one of my own masks to replace your mask images (./examples/celeba/masks), and the code ran successfully. But when I use my own image to replace your images (./examples/celeba/images), the code reports the same problem. (image)

I think my mask images work well, but I don't know what the problem with my images is. This is my image: (image) I erased an area in the image with an eraser. Could you tell me how you make this kind of image?

huangyuyu0426 avatar Feb 14 '19 03:02 huangyuyu0426

> I find that your test images' shape are (256L,256L,3L), so I have changed my own images' shape as the same as yours. But there is another problem […] RuntimeError: Given groups=1, weight of size [64, 4, 7, 7], expected input[1, 5, 262, 262] to have 4 channels, but got 5 channels instead […]

@huangyuyu0426 I'm surprised to see it even works on Python 2.7! Today, when I tried my own masks and my own images and continued training from the authors' pre-trained models, I got an error, which I think is caused by my Python 3.6.0 judging by what I found when I googled it. @knazeri Could you tell me the exact version you tested? I now know that Python 3.7 does not work, and it is not possible for me to install Python 3.6.1 or 3.6.5 on my institute's server.

  File "train.py", line 1, in <module>
    from main import main
  File "/auto/rcf-proj3/yaqiong//edge-connect-master/main.py", line 2, in <module>
    import cv2
ImportError: No module named 'cv2'
srun: error: hpc3821: task 0: Exited with exit code 1

Yaqiongchai avatar Mar 15 '19 04:03 Yaqiongchai

@Yaqiongchai We were using Python 3.5 and 3.6! I'm not sure why Python 3.7 might result in an error!

knazeri avatar Mar 15 '19 14:03 knazeri

> Hello! I have some questions. When I use your examples, I can achieve the same effect, but when I use my own images there are some problems […]

Hello, I still don't understand how the original images of the dataset are combined with the external mask dataset to become masked pictures. Can you tell me something about it?

wkkkkkx avatar Mar 28 '19 05:03 wkkkkkx

> Hello! I have some questions […]
> Hello, I still don't understand how the original image of the datasets is combined with the external mask dataset to become a picture with mask […]

You can either read the code, or read the paper.

Yaqiongchai avatar Mar 28 '19 17:03 Yaqiongchai

> Hello! I have some questions […]
> Hello, I still don't understand how the original image of the datasets is combined with the external mask dataset to become a picture with mask […]
> You can either read the code, or read the paper.

Thank you, I have only just come into contact with deep-learning image restoration, and I don't quite understand a lot of the code. Are the raw image and the mask what we input during training?

wkkkkkx avatar Mar 29 '19 11:03 wkkkkkx

hi, I met this problem too! @knazeri My image and mask image are below:

![guard2](https://user-images.githubusercontent.com/27956643/55275079-c07ab380-531b-11e9-9214-497a17bd0233.png)
![guard2](https://user-images.githubusercontent.com/27956643/55275081-c53f6780-531b-11e9-8040-67592c166eb0.png)
When I use the test function, the error occurs:
(pytorch) longmao@longmao-dl:~/workspace/edge-connect$ python test.py --checkpoints ./checkpoint/places2 --input ./examples/ownpictures/images/ --mask ./examples/ownpictures/masks/ --output ./checkpoint/results/
./checkpoint/places2/config.yml
/home/longmao/workspace/edge-connect/src/config.py:8: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  self._dict = yaml.load(self._yaml)
成功读取yml文件 ("yml file read successfully")
Loading EdgeModel generator...
Loading InpaintingModel generator...

start testing...

Traceback (most recent call last):
  File "test.py", line 2, in <module>
    main(mode=2)
  File "/home/longmao/workspace/edge-connect/main.py", line 61, in main
    model.test()
  File "/home/longmao/workspace/edge-connect/src/edge_connect.py", line 324, in test
    outputs = self.inpaint_model(images, edges, masks)
  File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/longmao/workspace/edge-connect/src/models.py", line 254, in forward
    inputs = torch.cat((images_masked, edges), dim=1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 400 and 402 in dimension 2 at /opt/conda/conda-bld/pytorch_1549636813070/work/aten/src/THC/generic/THCTensorMath.cu:83

The image and the mask image both have shape (402, 669, 3) and are in PNG format. Any ideas how to solve this?

lfxx avatar Mar 30 '19 10:03 lfxx

@lfxx A quick solution to your problem is to resize the input image to, say, (404, 700, 3) before passing it to the network. This is a known problem in our model, as you can see here: https://github.com/knazeri/edge-connect/issues/4 Our proposed network architecture uses two convolution layers for downsampling and two transposed convolutions for upsampling. The downsampling operator is implemented in PyTorch such that when the input size is not divisible by the stride, the layer output is rounded down to the greatest integer less than or equal to it. When combined with the upsampling operator, the sizes then do not match! For example:

downsample:
402 / 2 = 201
201 / 2 = 100.5 => 100

upsample:
100 * 2 = 200
200 * 2 = 400

Now 400 != 402 and you get an error! To make sure you don't receive that error, make the input image size divisible by 4! We'll fix this problem soon with a pre-processing step that resizes the input tensors if the sizes do not match!
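Until that pre-processing step lands, one way to make the input divisible by 4 yourself is to pad the bottom/right edges (a hypothetical helper, not part of the repo; resizing, as suggested above, works too):

```python
import numpy as np

def pad_to_multiple(img, multiple=4):
    # Reflect-pad H and W up to the next multiple of `multiple`, so the
    # two stride-2 downsampling layers can be exactly undone by the two
    # upsampling layers (e.g. 404 -> 202 -> 101 -> 202 -> 404).
    h, w = img.shape[:2]
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    pads = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (img.ndim - 2)
    return np.pad(img, pads, mode="reflect")

img = np.zeros((402, 669, 3), dtype=np.uint8)
padded = pad_to_multiple(img)
print(padded.shape)  # (404, 672, 3)
```

Reflect padding avoids introducing a hard black border; the padded strip can be cropped off the output again afterwards.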

knazeri avatar Mar 30 '19 14:03 knazeri

hi @knazeri, thanks for your answer! I have resized the input image as you said; now the image and the mask image both have shape (404, 700, 3) and are in PNG format. But another error occurs, as below:

(pytorch) longmao@longmao-dl:~/workspace/edge-connect$ python test.py --checkpoints ./checkpoint/places2 --input ./examples/ownpictures/images/ --mask ./examples/ownpictures/masks/ --output ./checkpoint/results/
./checkpoint/places2/config.yml
/home/longmao/workspace/edge-connect/src/config.py:8: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  self._dict = yaml.load(self._yaml)
成功读取yml文件 ("yml file read successfully")
Loading EdgeModel generator...
Loading InpaintingModel generator...

start testing...

Traceback (most recent call last):
  File "test.py", line 2, in <module>
    main(mode=2)
  File "/home/longmao/workspace/edge-connect/main.py", line 61, in main
    model.test()
  File "/home/longmao/workspace/edge-connect/src/edge_connect.py", line 324, in test
    outputs = self.inpaint_model(images, edges, masks)
  File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/longmao/workspace/edge-connect/src/models.py", line 255, in forward
    outputs = self.generator(inputs)                                    # in: [rgb(3) + edge(1)]
  File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/longmao/workspace/edge-connect/src/networks.py", line 81, in forward
    x = self.encoder(x)
  File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 4, 7, 7], expected input[1, 5, 410, 706] to have 4 channels, but got 5 channels instead

Any suggestions for solving this?

lfxx avatar Apr 03 '19 03:04 lfxx

The code I used to make the mask image is below:

import cv2

# cv2.imread loads images in BGR channel order, so COLOR_BGR2GRAY is the
# right conversion here (the original COLOR_RGB2GRAY swaps the R/B weights).
image = cv2.imread('/home/longmao/workspace/edge-connect/examples/ownpictures/images/2.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Threshold to a binary 0/255 mask: pixels brighter than 254 become white
ret, image = cv2.threshold(image, 254, 255, cv2.THRESH_BINARY)
cv2.imwrite('/home/longmao/workspace/edge-connect/examples/ownpictures/masks/2.png', image)

@knazeri

lfxx avatar Apr 03 '19 06:04 lfxx

@lfxx Your mask is fine; for some reason your input image is a 4-channel color image instead of 3-channel RGB! You can test it here:

from scipy.misc import imread
img = imread('image.png')
print(img.shape)   # prints (402, 669, 4)

Make sure your color image has the shape (w, h, 3) before passing it to the network.
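A small sketch of that check-and-fix step (the helper name is illustrative, not from the repo): drop the alpha channel if present, and expand grayscale to three channels.

```python
import numpy as np

def ensure_rgb(img):
    # Normalize an image array to 3 channels: drop an alpha channel if
    # present, replicate a single grayscale channel if needed.
    if img.ndim == 2:
        return np.stack([img] * 3, axis=-1)
    if img.shape[-1] == 4:
        return img[..., :3]
    return img

rgba = np.zeros((402, 669, 4), dtype=np.uint8)
rgb = ensure_rgb(rgba)
print(rgb.shape)  # (402, 669, 3)
```

Running the input through a check like this before saving would have caught both the 4-channel image and the 4-channel mask reported earlier in this thread.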

knazeri avatar Apr 03 '19 14:04 knazeri

> hi,@knazeri,thanks for your answer! i have resized the input image as you said […] RuntimeError: Given groups=1, weight of size [64, 4, 7, 7], expected input[1, 5, 410, 706] to have 4 channels, but got 5 channels instead […] any suggestions to solve this?

Hi, I got the same problem after converting the input image file type from 'jpg' to 'png' with Adobe Photoshop. I solved it by not changing the file type.

Toland12 avatar Apr 04 '19 03:04 Toland12

Now the code runs well, but the result with the pretrained model is not good. My image is below:
(image)
My mask image is below:
(image)
The command line used is below:

python test.py --checkpoints ./checkpoint/places2 --input ./examples/ownpictures/images/6.png --mask ./examples/ownpictures/masks/6.png --output ./examples/ownpictures/results/

Then the result is quite bad:
(image)
Any suggestions? @knazeri

lfxx avatar Apr 08 '19 09:04 lfxx

@lfxx Your provided mask is not entirely covering the missing region! I expanded the mask by 2 pixels and here are the results: (images)

One thing to note is that the image you are testing is much larger than the training images, so the results might not be as good. Our model works best with images smaller than 512x512.
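Expanding a mask by a couple of pixels, as described above, is a morphological dilation; in practice `cv2.dilate` with a small kernel is the usual tool, but a pure-NumPy sketch of the same idea (illustrative, not code from this repo) looks like:

```python
import numpy as np

def dilate_mask(mask, pixels=2):
    # Grow the white (255) region by `pixels` using repeated 3x3 max
    # filtering, i.e. dilation with a square structuring element.
    m = (mask > 127).astype(np.uint8)
    for _ in range(pixels):
        p = np.pad(m, 1, mode="constant")
        shifts = [p[i:i + m.shape[0], j:j + m.shape[1]]
                  for i in range(3) for j in range(3)]
        m = np.max(shifts, axis=0)
    return (m * 255).astype(np.uint8)

mask = np.zeros((9, 9), dtype=np.uint8)
mask[4, 4] = 255                      # single masked pixel
grown = dilate_mask(mask, pixels=2)   # becomes a 5x5 white square
print(int((grown == 255).sum()))  # 25
```

Slightly over-covering the damaged region is usually safer than under-covering it, since any un-masked damaged pixels are treated as ground truth by the model.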

knazeri avatar Apr 12 '19 02:04 knazeri

Hello, I have also been running this code recently. May I add you to discuss it? My WeChat: loveanshen; my QQ: 519838354; my email: [email protected]. Looking forward to your reply despite your busy schedule.

anshen666 avatar Dec 10 '19 09:12 anshen666

> Hello! I have some questions […]
> Hello, I still don't understand how the original image of the datasets is combined with the external mask dataset to become a picture with mask […]

Hello, I am new to the field of deep learning and I also encountered this problem. How did you solve it? Could you give me some advice, please?

shensongli avatar Dec 28 '22 10:12 shensongli

(Automatic vacation reply from QQ Mail.) Hello, I am currently on vacation and cannot reply to your email personally. I will reply as soon as possible after the holiday.

huangyuyu0426 avatar Dec 28 '22 10:12 huangyuyu0426