hed
Weird boundaries and fixed output size
Hi,
I am running your provided model on arbitrary images. However, I get weird boundaries on the top and on the left side. I could obviously just crop them out, but the errors seem to propagate to lower levels of resolution:
Do you know how to fix this problem? Also, is it possible to change the output size of a network without retraining? I noticed that changing the input size of the image in the prototxt file does not change anything.
net.blobs['data'].reshape(1, 3, 200, 200)
net.reshape()  # reshapes the net (changes the input)
Also look up Caffe net surgery for changing the output.
Thanks for your reply. The code I use is basically the tutorial. Both your suggestion and resizing the image in Python changed the size. But even with that, the image boundary remains. In fact, I have realized that the output is displaced by exactly 32 pixels toward the bottom and right and then cropped, regardless of the image size.
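For anyone who wants to hide the symptom on the output side: a minimal NumPy sketch (not from the HED code; the 32-pixel value is just the displacement reported above, so treat it as an assumption for your own build) of shifting a side output back up and to the left to compensate:

```python
import numpy as np

def realign(out, dy=32, dx=32):
    """Undo a fixed bottom-right displacement by shifting the map
    up/left by (dy, dx), zero-padding the exposed border."""
    h, w = out.shape
    aligned = np.zeros_like(out)
    aligned[:h - dy, :w - dx] = out[dy:, dx:]
    return aligned

# toy check: one 'edge' pixel observed at (40, 45) should move back to (8, 13)
out = np.zeros((200, 200), dtype=np.float32)
out[40, 45] = 1.0
aligned = realign(out)
```

Note this only post-processes the output; fixing the crop layer offsets in the prototxt (as suggested later in this thread) addresses the misalignment at its source.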
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import matplotlib.cm as cm
import scipy.misc
from PIL import Image
import scipy.io
import os
# Make sure that caffe is on the python path:
caffe_root = '../../' # this file is expected to be in {caffe_root}/examples/hed/
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
im_lst = []
im = Image.open('2008_000052.jpg')
im = im.resize((200, 200), Image.ANTIALIAS)
in_ = np.array(im, dtype=np.float32)
in_ = in_[:,:,::-1]
in_ -= np.array((104.00698793,116.66876762,122.67891434))
im_lst.append(in_)
idx = 0
gpu = 3
in_ = im_lst[idx]
in_ = in_.transpose((2,0,1))
# remove the following two lines if testing on CPU
caffe.set_mode_gpu()
caffe.set_device(gpu)
# load net
model_root = 'examples/hed/'
net = caffe.Net(model_root+'deploy.prototxt', model_root+'hed_pretrained_bsds.caffemodel', caffe.TEST)
# shape for input (data blob is N x C x H x W), set data
#net.blobs['data'].reshape(1, *in_.shape)
net.blobs['data'].reshape(1, 3,200,200)
net.reshape()
net.blobs['data'].data[...] = in_
# run the net forward
net.forward()
out1 = net.blobs['sigmoid-dsn1'].data[0][0,:,:]
out2 = net.blobs['sigmoid-dsn2'].data[0][0,:,:]
out3 = net.blobs['sigmoid-dsn3'].data[0][0,:,:]
out4 = net.blobs['sigmoid-dsn4'].data[0][0,:,:]
out5 = net.blobs['sigmoid-dsn5'].data[0][0,:,:]
fuse = net.blobs['sigmoid-fuse'].data[0][0,:,:]
# note: scipy.misc.imsave was removed in SciPy 1.2; use imageio.imwrite there
scipy.misc.imsave('out1.jpg', out1)
scipy.misc.imsave('out2.jpg', out2)
scipy.misc.imsave('out3.jpg', out3)
scipy.misc.imsave('out4.jpg', out4)
scipy.misc.imsave('out5.jpg', out5)
scipy.misc.imsave('fuse.jpg', fuse)
I had the same problem when running the code.
I have exactly the same problem.
I also have the same problem. The output edge map is not aligned to the input image.
I also ran into the same problem.
Same problem.
Just found a solution:
- make sure the input image is 500 * 500
- add the following param to the crop layers in the deploy.prototxt
crop_param {
  axis: 2
  offset: 32
  offset: 32
}
Thanks to this post: https://medium.com/@s1ddok/holistically-nested-edge-detection-on-ios-with-coreml-and-swift-e45df264cf66
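If you would rather apply that crop_param change programmatically, here is a sketch that inserts the param after every Crop layer declaration via plain string editing. The prototxt snippet below is a toy stand-in, not the actual HED deploy.prototxt, and the layer names are assumptions:

```python
# Toy stand-in for a deploy prototxt containing a Crop layer;
# the real HED deploy.prototxt has several of these.
deploy = """layer {
  name: "crop"
  type: "Crop"
  bottom: "upscore-dsn2"
  bottom: "data"
  top: "upscore-dsn2-crop"
}
"""

fix = '  crop_param { axis: 2 offset: 32 offset: 32 }\n'

# insert the fix line right after each Crop layer's type declaration
patched_lines = []
for line in deploy.splitlines(keepends=True):
    patched_lines.append(line)
    if 'type: "Crop"' in line:
        patched_lines.append(fix)
patched = "".join(patched_lines)
print(patched)
```

For a real deploy file you would read the text with `open(...).read()`, patch it the same way, and write it back out.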
I ran into the same problem as well. Can you provide a solution?