
demo error

Open CDElite opened this issue 6 years ago • 36 comments

Hi, I tried the demo with: python demo.py --dataroot ./facades/nat_new4 --valDataroot ./facades/nat_new4 --netG ./demo_model/netG_epoch_8.pth
However, it fails with the error below. Could you tell me what the cause is?

Random Seed: 3661
/usr/local/lib/python2.7/dist-packages/torchvision-0.2.1-py2.7.egg/torchvision/transforms/transforms.py:191: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
Traceback (most recent call last):
  File "demo.py", line 128, in <module>
    netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 537, in __init__
    self.tran_est=G(input_nc=3,output_nc=3, nf=64)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 88, in __init__
    layer2 = blockUNet(nf, nf*2, name, transposed=False, bn=True, relu=False, dropout=False)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 56, in blockUNet
    block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))
  File "/usr/local/lib/python2.7/dist-packages/torch-0.4.0-py2.7-linux-x86_64.egg/torch/nn/modules/module.py", line 169, in add_module
    raise KeyError("module name can't contain \".\"")
KeyError: 'module name can't contain "."'

CDElite avatar May 31 '18 03:05 CDElite

Hi, please install PyTorch 0.3.1: https://pytorch.org/previous-versions/

hezhangsprinter avatar Jun 01 '18 22:06 hezhangsprinter

Same here.

mod1998 avatar Jul 22 '18 01:07 mod1998

Have you solved the problem?

mod1998 avatar Jul 22 '18 01:07 mod1998

Some people suggest the following code. It may address the issue.

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)

model_dict = netG.state_dict()
tmpname = {}
i = 0
for k, v in model_dict.items():
    tmpname[i] = k
    i = i + 1

i = 0
if opt.netG != '':
    state_dict = torch.load(opt.netG)
    from collections import OrderedDict
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        name = tmpname[i]  # update key
        i = i + 1
        new_state_dict[name] = v

    netG.load_state_dict(new_state_dict)
    print(netG)

hezhangsprinter avatar Jul 22 '18 18:07 hezhangsprinter

Hi, if someone else needs this piece of code, here is the correct indentation for Python. The issue is caused by the new PyTorch version (0.4.0), which changed the naming rules in nn.Module; the models in torchvision.models were migrated for the same reason, which is why the demo doesn't work. Source: https://pytorch.org/2018/04/22/0_4_0-migration-guide.html

model_dict = netG.state_dict()
tmpname = {}
i = 0
for k, v in model_dict.items():   # record the current model's parameter names, in order
    tmpname[i] = k
    i = i + 1

i = 0
if opt.netG != '':
    state_dict = torch.load(opt.netG)
    from collections import OrderedDict
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():   # remap each checkpoint tensor onto the new key name
        name = tmpname[i]  # update key
        i = i + 1
        new_state_dict[name] = v

    netG.load_state_dict(new_state_dict)

Thanks @hezhangsprinter

Aleberello avatar Sep 14 '18 13:09 Aleberello

Thanks!! @Aleberello

hezhangsprinter avatar Sep 14 '18 13:09 hezhangsprinter

Thank you @Aleberello and @hezhangsprinter

monkiq avatar Sep 14 '18 17:09 monkiq

Hi, did the original poster manage to solve this problem? I followed the method the author gave, but I still get the same error...

SherlockSunset avatar Oct 04 '18 12:10 SherlockSunset

The error already starts at the line netG = net.dehaze(inputChannelSize, outputChannelSize, ngf), so putting the code above after that line is useless.

Gavin666Github avatar Nov 05 '18 06:11 Gavin666Github

Tested it myself: adding the code does solve the problem.

Tangyuny avatar Dec 19 '18 08:12 Tangyuny

Where exactly should that code go?? As Gavin666Github said, adding it afterwards doesn't help. For now I can only replace every '.' with '_', which also runs, but what did the author mean? Can anyone explain?

noobgrow avatar Dec 26 '18 04:12 noobgrow

I also replaced the '.' with '_' in dehaze22.py and it runs, but loading the keys fails: some keys are not loaded and some are loaded incorrectly. Please help~ Here is a short excerpt:

self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for dehaze:
Missing key(s) in state_dict:
There are also mismatches, for example:
size mismatch for tran_dense.dense_block1.denselayer1.conv1.weight: copying a param with shape torch.Size([160]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
size mismatch for tran_dense.dense_block1.denselayer1.norm2.weight: copying a param with shape torch.Size([160]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for tran_dense.dense_block1.denselayer1.norm2.bias: copying a param with shape torch.Size([128, 160, 1, 1]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for tran_dense.dense_block1.denselayer1.conv2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32, 128, 3, 3]).
size mismatch for tran_dense.dense_block1.denselayer2.norm1.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for tran_dense.dense_block1.denselayer2.norm1.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).

QingyuGuo avatar Apr 07 '19 08:04 QingyuGuo

Tested it myself: adding the code does solve the problem.

Do you mean modifying the code that comes after the line netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)? But the netG line before that still throws the error?

QingyuGuo avatar Apr 07 '19 09:04 QingyuGuo

I have just resolved this problem. It happens because the module naming rules have changed. What I did: go to dehaze22.py, where you can see block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True)); just change every %s.leakyrelu or %s.relu to %s_leakyrelu or %s_relu. I changed more than 10 places. Hope that helps. Reference: https://github.com/taey16/pix2pixBEGAN.pytorch/issues/7
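A minimal, self-contained sketch of that rename, based on the blockUNet line quoted in the traceback above (the other occurrences in dehaze22.py follow the same pattern; the variable values here are placeholders for illustration only):

import torch.nn as nn

block = nn.Sequential()
name = 'layer2'   # hypothetical block name, just to make the snippet runnable
# before: fails on PyTorch >= 0.4 because module names may no longer contain '.'
#   block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))
# after:
block.add_module('%s_leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))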

ZhuanShan avatar May 05 '19 03:05 ZhuanShan

I have just resolved this problem. It happens because the module naming rules have changed. What I did: go to dehaze22.py, where you can see block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True)); just change every %s.leakyrelu or %s.relu to %s_leakyrelu or %s_relu. I changed more than 10 places. Hope that helps. Reference: https://github.com/taey16/pix2pixBEGAN.pytorch/issues/7

I did it the same way. But can you load the pretrained model correctly? I cannot load it.

QingyuGuo avatar May 05 '19 03:05 QingyuGuo

I have just resolved this problem. It happens because the module naming rules have changed. What I did: go to dehaze22.py, where you can see block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True)); just change every %s.leakyrelu or %s.relu to %s_leakyrelu or %s_relu. I changed more than 10 places. Hope that helps. Reference: https://github.com/taey16/pix2pixBEGAN.pytorch/issues/7

I did it the same way. But can you load the pretrained model correctly? I cannot load it.

What are your versions of torch and torchvision? I loaded it successfully with torch 0.3.1 and torchvision 0.1.8.

ZhuanShan avatar May 06 '19 06:05 ZhuanShan

Hi! I also have this problem. I added the code as above, but the error stayed the same. How can I resolve it?

yuchenlichuck avatar May 23 '19 06:05 yuchenlichuck

I have just resolved this problem. It happens because the module naming rules have changed. What I did: go to dehaze22.py, where you can see block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True)); just change every %s.leakyrelu or %s.relu to %s_leakyrelu or %s_relu. I changed more than 10 places. Hope that helps. Reference: https://github.com/taey16/pix2pixBEGAN.pytorch/issues/7

I did it the same way. But can you load the pretrained model correctly? I cannot load it.

What are your versions of torch and torchvision? I loaded it successfully with torch 0.3.1 and torchvision 0.1.8.

After following what you suggested, I came across a new problem (see the attached screenshot). Have you encountered this problem? Could you give me some advice? Thanks in advance.

just-blank avatar Aug 21 '19 14:08 just-blank

I also replaced the '.' with '_' in dehaze22.py and it runs, but loading the keys fails: some keys are not loaded and some are loaded incorrectly. Please help~ Here is a short excerpt:

self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for dehaze:
Missing key(s) in state_dict:
There are also mismatches, for example:
size mismatch for tran_dense.dense_block1.denselayer1.conv1.weight: copying a param with shape torch.Size([160]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
size mismatch for tran_dense.dense_block1.denselayer1.norm2.weight: copying a param with shape torch.Size([160]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for tran_dense.dense_block1.denselayer1.norm2.bias: copying a param with shape torch.Size([128, 160, 1, 1]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for tran_dense.dense_block1.denselayer1.conv2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32, 128, 3, 3]).
size mismatch for tran_dense.dense_block1.denselayer2.norm1.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for tran_dense.dense_block1.denselayer2.norm1.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).

Hi, have you solved this problem?

xf-zh avatar Sep 12 '19 07:09 xf-zh

Tested it myself: adding the code does solve the problem.

Hi, where exactly should this piece of code be added?

Alisaxing avatar Nov 17 '19 13:11 Alisaxing

I also replaced the '.' with '_' in dehaze22.py and it runs, but loading the keys fails: some keys are not loaded and some are loaded incorrectly. Please help~ Here is a short excerpt:

self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for dehaze:
Missing key(s) in state_dict:
There are also mismatches, for example:
size mismatch for tran_dense.dense_block1.denselayer1.conv1.weight: copying a param with shape torch.Size([160]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
size mismatch for tran_dense.dense_block1.denselayer1.norm2.weight: copying a param with shape torch.Size([160]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for tran_dense.dense_block1.denselayer1.norm2.bias: copying a param with shape torch.Size([128, 160, 1, 1]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for tran_dense.dense_block1.denselayer1.conv2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32, 128, 3, 3]).
size mismatch for tran_dense.dense_block1.denselayer2.norm1.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for tran_dense.dense_block1.denselayer2.norm1.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).

You have to use PyTorch 0.3.0, and the torchvision version must not exceed 0.4; to install them you can only download from the link or build from source.

ghost avatar Dec 10 '19 09:12 ghost

I met this problem and solved it; let me share my method.
1. Add @Aleberello's code right after the line netG = net.dehaze(inputChannelSize, outputChannelSize, ngf).
2. Modify dehaze22.py: change every add_module("%s." ...) name to add_module("%s_" ...), for example:

block.add_module('%s.relu' % name, nn.ReLU(inplace=True))   # before
block.add_module('%s_relu' % name, nn.ReLU(inplace=True))   # after

blackAndrechen avatar Feb 07 '20 15:02 blackAndrechen

2020.2.17 Solution to load the pretrained model.

Step 1:

In the dehaze22.py file, change all the %s. to %s_ as @blackAndrechen 's comment.

Step 2:

Change the keys in the netG_epoch_8.pth model. I have a modified one, please download it here

Please contact me if you have any question about the pretrained model.

Step 3:

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
netG.load_state_dict(torch.load('netG.pth'))

Please contact me if you have any question about the pre-trained model.
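For anyone who cannot use the download link, here is a minimal sketch of what Step 2 amounts to (my own reconstruction, not the author's script). It renames the checkpoint keys by mapping them, in order, onto the keys of a freshly built model with the '%s_' names from Step 1, relying on the same positional assumption as the snippet earlier in this thread. The import name dehaze22 and the arguments 3, 3, 64 are assumptions taken from how demo.py calls net.dehaze.

from collections import OrderedDict
import torch
import dehaze22 as net            # assumed import name for the repo's model definition

netG = net.dehaze(3, 3, 64)       # assumed demo defaults: inputChannelSize, outputChannelSize, ngf
old_state = torch.load('./demo_model/netG_epoch_8.pth',
                       map_location=lambda storage, loc: storage)   # load onto the CPU

new_state = OrderedDict()
for new_key, (old_key, value) in zip(netG.state_dict().keys(), old_state.items()):
    new_state[new_key] = value    # same tensor, renamed key

netG.load_state_dict(new_state)   # sanity check that every shape still lines up
torch.save(new_state, 'netG.pth') # the renamed checkpoint used in Step 3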

ghost avatar Feb 17 '20 21:02 ghost

2020.2.17 Solution to load the pretrained model.

Step 1:

In the dehaze22.py file, change all the %s. to %s_ as @blackAndrechen 's comment.

Step 2:

Change the keys in the netG_epoch_8.pth model. I have a modified one, please download it here

Please contact me if you have any question about the pretrained model.

Step 3:

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
netG.load_state_dict(torch.load('netG.pth'))

Please contact me if you have any question about the pre-trained model.

Thank you. I have met this problem and tried the method you propose; it does solve the "RuntimeError: Error(s) in loading state_dict for dehaze: Missing key(s) in state_dict" problem. But a new problem appears: RuntimeError: set_sizes_contiguous is not allowed on a Tensor created from .data or .detach(). How can I solve it? One suggested solution is to change data.resize_as_ to resize_, but that does not seem to work.

yinxuping avatar Feb 29 '20 04:02 yinxuping

2020.2.17 Solution to load the pretrained model.

Step 1:

In the dehaze22.py file, change all the %s. to %s_ as @blackAndrechen 's comment.

Step 2:

Change the keys in the netG_epoch_8.pth model. I have a modified one, please download it here. Please contact me if you have any question about the pretrained model.

Step 3:

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
netG.load_state_dict(torch.load('netG.pth'))

Please contact me if you have any question about the pre-trained model.

Thank you. I have met this problem and tried the method you propose; it does solve the "RuntimeError: Error(s) in loading state_dict for dehaze: Missing key(s) in state_dict" problem. But a new problem appears: RuntimeError: set_sizes_contiguous is not allowed on a Tensor created from .data or .detach(). How can I solve it? One suggested solution is to change data.resize_as_ to resize_, but that does not seem to work.

I have solved the new problem by changing data.resize_as_ to resize_as_. Then I found that my GPU memory is not enough: it is only 6 GB, but it needs 20 GB... Can anyone tell me how to switch from GPU to CPU? Thanks a lot.
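For reference, a tiny self-contained illustration of that rename (the variable names here are hypothetical; the real ones live in demo.py):

import torch

target = torch.zeros(1)                  # hypothetical placeholder tensors
batch = torch.randn(2, 3, 64, 64)

# old pattern, which raises the error above on newer PyTorch:
#   target.data.resize_as_(batch).copy_(batch)
# renamed version described in the comment above:
target.resize_as_(batch).copy_(batch)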

yinxuping avatar Feb 29 '20 05:02 yinxuping

@yinxuping

You just need to add one more parameter to the torch.load() function.

If map_location is a callable, it will be called once for each serialized storage with two arguments: storage and location. The storage argument will be the initial deserialization of the storage, residing on the CPU. Each serialized storage has a location tag associated with it which identifies the device it was saved from, and this tag is the second argument passed to map_location. The builtin location tags are 'cpu' for CPU tensors and 'cuda:device_id' (e.g. 'cuda:2') for CUDA tensors. map_location should return either None or a storage. If map_location returns a storage, it will be used as the final deserialized object, already moved to the right device. Otherwise, torch.load() will fall back to the default behavior, as if map_location wasn’t specified.

For example, use torch.load('netG.pth', map_location=torch.device('cpu'))
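Putting it together, a short CPU-only loading sketch under the same assumptions as before (the dehaze22 import name and the 3, 3, 64 arguments mirror how demo.py calls net.dehaze; netG.pth is the renamed checkpoint from the earlier comment):

import torch
import dehaze22 as net                 # assumed import name, matching demo.py's usage

netG = net.dehaze(3, 3, 64)            # assumed demo defaults: inputChannelSize, outputChannelSize, ngf
state = torch.load('netG.pth', map_location=torch.device('cpu'))
netG.load_state_dict(state)
netG.cpu()                             # keep the model itself on the CPU for inference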

Good luck.

ghost avatar Mar 12 '20 14:03 ghost

@yinxuping I do not understand why the model would need 20 GB of memory. Is there anything wrong with your model?

ghost avatar Mar 12 '20 14:03 ghost

@acoder-fin your link to the updated model seems broken. Can you re-upload it, please?

hamddan4 avatar Mar 20 '20 15:03 hamddan4

@hamddan4 https://drive.google.com/file/d/111m-y0jO_8iU9F3hIE4nDDy-rCNvFDS2/view?usp=sharing Here is the new link.

ghost avatar Mar 25 '20 08:03 ghost

@acoder-fin the link can be opened.

shuowoshishui avatar Jun 02 '20 16:06 shuowoshishui