MTMT
Getting strange results when using pretrained settings
Hi, I am new to image segmentation. Your method seems very promising, but when I load your pretrained weights I get strange results: the model does not seem to detect the shadows. I downloaded the iter_10000.pth file and load it with:
```python
import torch

net = build_model()  # build_model() as provided in this repo
checkpoint = torch.load('iter_10000.pth', map_location=torch.device('cpu'))
net.load_state_dict(checkpoint)
```
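In case it matters, this is a diagnostic I could also run to check whether any weights fail to match (I have not inspected its output yet):

```python
# Load non-strictly and print any mismatched keys.
# (I have not verified yet whether these lists come back empty.)
result = net.load_state_dict(checkpoint, strict=False)
print('missing keys:', result.missing_keys)
print('unexpected keys:', result.unexpected_keys)
```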
Then I run:

```python
xxx = net(img_var)[0][0]
res = torch.sigmoid(xxx)
```
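In case the problem is in my preprocessing, this is roughly how I build img_var (the 416x416 resize, the ImageNet normalization constants, and the placeholder filename are my own guesses rather than values taken from your repo):

```python
from PIL import Image
from torchvision import transforms

# My preprocessing sketch: the input size and normalization constants
# below are assumptions on my part and may be the wrong settings.
img_transform = transforms.Compose([
    transforms.Resize((416, 416)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open('test.jpg').convert('RGB')  # 'test.jpg' is a placeholder
img_var = img_transform(img).unsqueeze(0)    # add a batch dimension

net.eval()  # I switch to eval mode before the forward pass shown above
```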
The masks it outputs are off. Please compare the ground-truth image, the input, and my result: it is doing some sort of segmentation, but it is not accurately finding the shadows. I am sure I am either loading the wrong model or doing something wrong in the settings. Do you have any pointers on what I am doing wrong? Thanks very much!