edge-connect
question about my own images
Hello! I have some questions. When I use your examples, I can achieve the same effect, but when I use my own images there are some problems. Can you tell me how I can get the corresponding mask images?
@huangyuyu0426
Basically, you can create the masks in image-editing software if you want them to be precise, e.g. for removing an object from an image. In that case you have to make sure that the mask covers the entire object in the image. We use Adobe Photoshop for that.
If you only need to test the model with some random mask, then you can either create them in Python or use the referenced datasets. But please note that we use binary masks in our model. That means you need to save mask images in .png format, as I have explained here: https://github.com/knazeri/edge-connect/issues/37#issuecomment-458182472
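For a quick test, here is a minimal sketch of generating a random rectangular binary mask with NumPy and OpenCV (the size and file name are placeholders, and it assumes white marks the region to fill, as in the example masks):
import numpy as np
import cv2
# Single-channel mask: 255 = region to inpaint, 0 = known pixels.
mask = np.zeros((256, 256), dtype=np.uint8)
# Mark a random rectangle (a quarter of each side) as the missing region.
h, w = mask.shape
y = np.random.randint(0, h // 2)
x = np.random.randint(0, w // 2)
mask[y:y + h // 4, x:x + w // 4] = 255
# Save as a single-channel .png so it loads as a binary mask.
cv2.imwrite('mask.png', mask)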
Thanks a lot. I followed your advice. However, there is another question.
(mask image and error screenshot attached)
I cannot understand it.
I found that your test images' shape is (256L, 256L, 3L), so I changed my own images' shape to match yours, but there is another problem:
Traceback (most recent call last):
File "test.py", line 2, in <module>
main(mode=2)
File "/home/huangyuyu/edge-connect/main.py", line 61, in main
model.test()
File "/home/huangyuyu/edge-connect/src/edge_connect.py", line 339, in test
outputs = self.inpaint_model(images, edges, masks)
File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/huangyuyu/edge-connect/src/models.py", line 255, in forward
outputs = self.generator(inputs) # in: [rgb(3) + edge(1)]
File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/huangyuyu/edge-connect/src/networks.py", line 81, in forward
x = self.encoder(x)
File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/shentao/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 4, 7, 7], expected input[1, 5, 262, 262] to have 4 channels, but got 5 channels instead
Could you tell me how I can solve this problem, or how I should make my own images and masks?
@huangyuyu0426 Your mask image needs to be a binary image. Your mask image has four channels instead of one! You can test it using:
from scipy.misc import imread
img = imread('img.png')
print(img.shape) # (218, 178, 4)
I might change the code so that if an image is not a binary mask, we either get a meaningful error or just load the first channel! For now, you should save your masks in a binary format!
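Until then, a minimal sketch for collapsing such a mask into a single-channel binary .png (the file names are placeholders):
import cv2
# Loading in grayscale collapses any extra channels (e.g. alpha) to one.
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
# Force a strict binary image: 255 for masked pixels, 0 everywhere else.
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
cv2.imwrite('mask_binary.png', mask)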
@knazeri I didn't change my mask images. When I use one of my mask images to replace your mask images (./examples/celeba/masks), the code runs successfully. But when I use my own image to replace your image (./examples/celeba/images), the code reports the same problem. I think my mask images work well, but I don't know what the problem with my images is.
This is my image.
I erased an area in the image with an eraser. Could you tell me how you make this kind of image?
@huangyuyu0426 I'm surprised to see it can even work on Python 2.7! Today, when I tried my own masks and my own images and continued training from the author's pre-trained models, I got an error which, after googling it, I think is caused by my Python 3.6.0. @knazeri Could you tell me the exact version you tested with? I know now that Python 3.7 does not work, and it is not possible for me to install Python 3.6.1 or 3.6.5 on my institute's server.
File "train.py", line 1, in
from main import main File "/auto/rcf-proj3/yaqiong//edge-connect-master/main.py", line 2, in import cv2 ImportError: No module named 'cv2' srun: error: hpc3821: task 0: Exited with exit code 1
@Yaqiongchai We were using Python 3.5 and 3.6! I'm not sure why Python 3.7 might result in an error!
Hello, I still don't understand how the original image from the dataset is combined with the external mask dataset to produce a masked picture. Can you tell me more about it?
You can either read the code, or read the paper.
Thank you. I have only just started working on deep-learning image restoration, and there is a lot of the code I don't quite understand. Are the raw image and the mask what we feed in during training?
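As the tracebacks in this thread show (models.py builds inputs = torch.cat((images_masked, edges), dim=1)), the image and the mask are combined on the fly. A minimal sketch of the idea, assuming the convention that a white mask (1) marks the missing region:
import torch
# Hypothetical tensors: images (N, 3, H, W) in [0, 1]; masks (N, 1, H, W).
images = torch.rand(1, 3, 256, 256)
masks = torch.zeros(1, 1, 256, 256)
masks[:, :, 96:160, 96:160] = 1  # 1 = missing region
# Keep the known pixels, fill the masked region with white:
# this is the "picture with mask" the network sees.
images_masked = images * (1 - masks) + masks
# The inpainting model then concatenates it with an edge map.
edges = torch.zeros(1, 1, 256, 256)  # placeholder edge map
inputs = torch.cat((images_masked, edges), dim=1)  # shape (1, 4, 256, 256)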
Hi, I met this problem too! @knazeri My image and mask image are below (screenshots attached):
When I use the test function, this error occurs:
(pytorch) longmao@longmao-dl:~/workspace/edge-connect$ python test.py --checkpoints ./checkpoint/places2 --input ./examples/ownpictures/images/ --mask ./examples/ownpictures/masks/ --output ./checkpoint/results/
./checkpoint/places2/config.yml
/home/longmao/workspace/edge-connect/src/config.py:8: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
self._dict = yaml.load(self._yaml)
成功读取yml文件
Loading EdgeModel generator...
Loading InpaintingModel generator...
start testing...
Traceback (most recent call last):
File "test.py", line 2, in <module>
main(mode=2)
File "/home/longmao/workspace/edge-connect/main.py", line 61, in main
model.test()
File "/home/longmao/workspace/edge-connect/src/edge_connect.py", line 324, in test
outputs = self.inpaint_model(images, edges, masks)
File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/longmao/workspace/edge-connect/src/models.py", line 254, in forward
inputs = torch.cat((images_masked, edges), dim=1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 400 and 402 in dimension 2 at /opt/conda/conda-bld/pytorch_1549636813070/work/aten/src/THC/generic/THCTensorMath.cu:83
The image and the mask image both have shape (402, 669, 3) and are in .png format. Any ideas how to solve this?
@lfxx A quick solution to your problem is to resize the input image to say (404, 700, 3)
before passing it to the network. This is a known problem in our model as you can see here: https://github.com/knazeri/edge-connect/issues/4
Our proposed network architecture uses two convolution layers for downsampling and two transposed convolutions for upsampling. The downsampling operator is implemented in PyTorch such that when the input size is not divisible by the stride, it takes the greatest integer less than or equal to the layer output (i.e. it floors). When combined with the upsampling operator, the sizes no longer match! For example:
downsample:
402 / 2 = 201
201 / 2 = 100.5 => 100
upsample:
100 * 2 = 200
200 * 2 = 400
Now 400 != 402
and you get the error! To make sure you don't hit it, make the input image dimensions divisible by 4!
We'll fix this soon by adding a pre-processing step that resizes the input tensors when the sizes do not match!
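In the meantime, a minimal sketch of such a pre-processing step that crops an image so both sides are divisible by 4 (the paths are placeholders):
import cv2
img = cv2.imread('image.png')
h, w = img.shape[:2]
# Crop each side down to the nearest multiple of 4 so that two stride-2
# downsampling layers followed by two upsampling layers restore the size.
img = img[:h - h % 4, :w - w % 4]
cv2.imwrite('image_div4.png', img)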
Hi @knazeri, thanks for your answer! I have resized the input image as you said, so the image and the mask image now both have shape (404, 700, 3) and are in .png format. But another error occurs, as below:
(pytorch) longmao@longmao-dl:~/workspace/edge-connect$ python test.py --checkpoints ./checkpoint/places2 --input ./examples/ownpictures/images/ --mask ./examples/ownpictures/masks/ --output ./checkpoint/results/
./checkpoint/places2/config.yml
/home/longmao/workspace/edge-connect/src/config.py:8: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
self._dict = yaml.load(self._yaml)
成功读取yml文件
Loading EdgeModel generator...
Loading InpaintingModel generator...
start testing...
Traceback (most recent call last):
File "test.py", line 2, in <module>
main(mode=2)
File "/home/longmao/workspace/edge-connect/main.py", line 61, in main
model.test()
File "/home/longmao/workspace/edge-connect/src/edge_connect.py", line 324, in test
outputs = self.inpaint_model(images, edges, masks)
File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/longmao/workspace/edge-connect/src/models.py", line 255, in forward
outputs = self.generator(inputs) # in: [rgb(3) + edge(1)]
File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/longmao/workspace/edge-connect/src/networks.py", line 81, in forward
x = self.encoder(x)
File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/longmao/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 320, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 4, 7, 7], expected input[1, 5, 410, 706] to have 4 channels, but got 5 channels instead
Any suggestions to solve this?
The code I used to make the mask image is below:
import cv2
# cv2.imread loads the image in BGR channel order.
image = cv2.imread('/home/longmao/workspace/edge-connect/examples/ownpictures/images/2.png')
# Collapse to a single grayscale channel.
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Threshold so near-white pixels (the erased region) become 255 and everything else 0.
ret, image = cv2.threshold(image, 254, 255, cv2.THRESH_BINARY)
cv2.imwrite('/home/longmao/workspace/edge-connect/examples/ownpictures/masks/2.png', image)
@knazeri
@lfxx Your mask is fine; it's just that, for some reason, your input image is a 4-channel color image instead of 3-channel RGB! You can test it here:
from scipy.misc import imread
img = imread('image.png')
print(img.shape) # prints (402, 669, 4)
Make sure your color image has the shape (h, w, 3) before passing it to the network.
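If your .png was saved with an alpha channel (Photoshop often adds one on export), a minimal sketch to drop it (the file names are placeholders):
import cv2
img = cv2.imread('image.png', cv2.IMREAD_UNCHANGED)
if img.ndim == 3 and img.shape[2] == 4:
    # Drop the alpha channel, keeping the three color channels.
    img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
cv2.imwrite('image_rgb.png', img)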
Hi, I got the same problem because I changed the input image file type from 'jpg' to 'png' with Adobe Photoshop. I solved it by not changing the file type.
Now the code runs well, but the result with the pre-trained model is not good. My image is below:
My mask image is below:
The command line used is below:
python test.py --checkpoints ./checkpoint/places2 --input ./examples/ownpictures/images/6.png --mask ./examples/ownpictures/masks/6.png --output ./examples/ownpictures/results/
Then the result is quite bad:
Any suggestions? @knazeri
@lfxx Your provided mask is not entirely covering the missing region! I expanded the mask by 2 pixels and here are the results:
One thing to note is that the image you are testing is much larger than the training images, so the results might not be as good. Our model works best with images smaller than 512x512.
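For anyone else with an undersized mask, here is a minimal sketch of expanding (dilating) it by a couple of pixels (the kernel size approximates a 2-pixel expansion; file names are placeholders):
import cv2
import numpy as np
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
# A 5x5 kernel grows the white (masked) region by ~2 pixels on each side.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(mask, kernel, iterations=1)
cv2.imwrite('mask_dilated.png', mask)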
Hello, I've also been running this code recently. Could I add you so we can discuss it? My WeChat: loveanshen. My QQ: 519838354. My email: [email protected]. I look forward to your reply despite your busy schedule.
Hello, I am new to the field of deep learning and I also encountered this problem. How did you solve it? Could you give me some advice, please?