
how to adjust selection

Open · schwarzwals opened this issue on Apr 04 '22 · 3 comments

First of all, you guys did an amazing job! I'm shocked by the accuracy of this U2NET! I would like to know if I can adjust the selection, or somehow fine-tune it, so it doesn't bite into the person in my situation (see the attached screenshot). [Screenshot: Screen Shot 2022-04-04 at 22 48 50]

```python
import numpy as np
import torch
from skimage import transform
from torch.autograd import Variable

# Preprocessing: resize to the 320x320 input U-2-Net expects
image = transform.resize(img, (320, 320), mode='constant')

# Normalize each channel with the ImageNet mean/std
tmpImg = np.zeros((image.shape[0], image.shape[1], 3))
tmpImg[:, :, 0] = (image[:, :, 0] - 0.485) / 0.229
tmpImg[:, :, 1] = (image[:, :, 1] - 0.456) / 0.224
tmpImg[:, :, 2] = (image[:, :, 2] - 0.406) / 0.225

# HWC -> CHW, then add a batch dimension
tmpImg = tmpImg.transpose((2, 0, 1))
tmpImg = np.expand_dims(tmpImg, 0)
image = torch.from_numpy(tmpImg)

image = image.type(torch.FloatTensor)
image = Variable(image)

# Inference: d1 is the fused side output, used as the final prediction
d1, d2, d3, d4, d5, d6, d7 = net(image)
pred = d1[:, 0, :, :]

# Min-max normalize the saliency map to [0, 1]
ma = torch.max(pred)
mi = torch.min(pred)
dn = (pred - mi) / (ma - mi)
pred = dn
```
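To then get a saveable mask out of `pred`, the usual flow (mirroring u2net_test.py) is to squeeze it back to a 2-D array, scale it to 8-bit, and resize it to the original image size — a minimal sketch, where `img` and `pred` are the names from the snippet above and the output path is a placeholder:

```python
from PIL import Image
import numpy as np

# Drop the batch dimension and move the prediction to the CPU
predict_np = pred.squeeze().cpu().data.numpy()

# Scale [0, 1] -> [0, 255], then resize back to the original (W, H)
mask = Image.fromarray((predict_np * 255).astype(np.uint8))
mask = mask.resize((img.shape[1], img.shape[0]), resample=Image.BILINEAR)
mask.save('u2net_mask.png')
```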

schwarzwals commented on Apr 04 '22 20:04

You could try two options: (1) add a post-processing step such as CascadePSP (https://github.com/hkchengrex/CascadePSP), though it may cost a bit more time; or (2) add a matting step, as rembg does (https://github.com/danielgatis/rembg).
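For option (1), CascadePSP is published on PyPI as `segmentation-refinement`; below is a minimal sketch following the usage shown in its README (the file paths are placeholders, and the exact API should be checked against the repo):

```python
import cv2
import segmentation_refinement as refine

image = cv2.imread('person.jpg')
mask = cv2.imread('u2net_mask.png', cv2.IMREAD_GRAYSCALE)

# Loads the pretrained refinement model; device can also be 'cpu'
refiner = refine.Refiner(device='cuda:0')

# fast=False runs the full global + local refinement;
# L caps the working resolution (lower = faster, less memory)
refined = refiner.refine(image, mask, fast=False, L=900)
cv2.imwrite('refined_mask.png', refined)
```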


xuebinqin commented on Apr 04 '22 22:04

So there's no way to leave more space around the person? We're talking about a few pixels... Also, if I only need to generate a mask, would that speed up the process? And how would I do that? A short example would help. Thank you very much again!

schwarzwals commented on Apr 06 '22 12:04

To leave more space, you could try a "dilation" operation; the kernel size and number of iterations are easy to configure. To speed up the process, you can convert the model to ONNX or TensorRT. Generating only the mask won't reduce the time cost much.
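As a concrete example of the dilation suggestion — a minimal OpenCV sketch, where `u2net_mask.png` is a placeholder for the mask saved earlier; a larger kernel or more iterations pushes the boundary further out from the person:

```python
import cv2

mask = cv2.imread('u2net_mask.png', cv2.IMREAD_GRAYSCALE)

# An elliptical structuring element grows the foreground outward;
# a 5x5 kernel over 2 iterations adds roughly 4 pixels of margin
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
dilated = cv2.dilate(mask, kernel, iterations=2)

cv2.imwrite('dilated_mask.png', dilated)
```

For the speed-up, `torch.onnx.export(net, torch.randn(1, 3, 320, 320), 'u2net.onnx')` is the usual starting point for the ONNX conversion mentioned in the reply.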


xuebinqin commented on Apr 06 '22 21:04