nn_tools

Error when converting AlphaPose SPPE module to Caffe

Open · ChenYingpeng opened this issue Mar 20, 2019 · 6 comments

Hi, I modified the resnet_pytorch_2_caffe.py file as follows:

```python
import sys
sys.path.append('/home/chen/object-detection/pytorch2caffe/')
import torch
from torch.autograd import Variable
from torchvision.models import resnet
import pytorch_to_caffe
from SPPE.src.models.FastPose import createModel

if __name__=='__main__':
    name='sppe-seresnet101'
    pose_model = createModel()
    pose_model.eval()
    print('Loading pose model from {}'.format('./models/sppe/duc_se.pth'))
    pose_model.load_state_dict(torch.load('./models/sppe/duc_se.pth'))

    input=Variable(torch.ones([1,3,320,256]))

    pytorch_to_caffe.trans_net(pose_model,input,name)
    pytorch_to_caffe.save_prototxt('{}.prototxt'.format(name))
    pytorch_to_caffe.save_caffemodel('{}.caffemodel'.format(name))
```

The following error happens when I run it in the terminal:

```
Loading pose model from ./models/sppe/duc_se.pth
Starting Transform, This will take a while
140349235862408:blob1 was added to blobs
140349235862408:blob1 getting
conv1
conv1 was added to layers
140349235861976:conv_blob1 was added to blobs
140349235862408:blob1 getting
140349235861976:conv_blob1 getting
<Caffe.caffe_net.Caffemodel object at 0x7fa5afc01f98>
140349235861976:conv_blob1 getting
batch_norm1
batch_norm1 was added to layers
140349235859816:batch_norm_blob1 was added to blobs
bn_scale1
bn_scale1 was added to layers
140349235859816:batch_norm_blob1 getting
relu1
relu1 was added to layers
140349235859816:relu_blob1 was added to blobs
140349235859816:relu_blob1 getting
max_pool1
max_pool1 was added to layers
140349235862552:max_pool_blob1 was added to blobs
140349235859816:relu_blob1 getting
WARNING: the output shape miss match at max_pool1: input torch.Size([1, 64, 160, 128]) output---Pytorch:torch.Size([1, 64, 80, 64])---Caffe:torch.Size([1, 64, 81, 65])
This is caused by the different implementation that ceil mode in caffe and the floor mode in pytorch. You can add the clip layer in caffe prototxt manually if shape mismatch error is caused in caffe.
conv2
conv2 was added to layers
140349235862480:conv_blob2 was added to blobs
140349235862552:max_pool_blob1 getting
140349235862480:conv_blob2 getting
<Caffe.caffe_net.Caffemodel object at 0x7fa5afc01f98>
140349235862480:conv_blob2 getting
batch_norm2
batch_norm2 was added to layers
140349235859528:batch_norm_blob2 was added to blobs
bn_scale2
bn_scale2 was added to layers
conv3
conv3 was added to layers
140349235862624:conv_blob3 was added to blobs
140349235859528:batch_norm_blob2 getting
140349235862624:conv_blob3 getting
<Caffe.caffe_net.Caffemodel object at 0x7fa5afc01f98>
140349235862624:conv_blob3 getting
batch_norm3
batch_norm3 was added to layers
140349235862696:batch_norm_blob3 was added to blobs
bn_scale3
bn_scale3 was added to layers
conv4
conv4 was added to layers
140349235862840:conv_blob4 was added to blobs
140349235862696:batch_norm_blob3 getting
140349235862840:conv_blob4 getting
<Caffe.caffe_net.Caffemodel object at 0x7fa5afc01f98>
140349235862840:conv_blob4 getting
batch_norm4
batch_norm4 was added to layers
140349235859672:batch_norm_blob4 was added to blobs
bn_scale4
bn_scale4 was added to layers
ave_pool1
ave_pool1 was added to layers
140349235863056:ave_pool_blob1 was added to blobs
140349235859672:batch_norm_blob4 getting
IMPORTANT WARNING: number in item (80, 64) is not the same,try hieht and wight spilt up
view1
view1 was added to layers
view1
140349235863344:view_blob1 was added to blobs
140349235863056:ave_pool_blob1 getting
fc1
fc1 was added to layers
140349235863128:fc_blob1 was added to blobs
140349235863344:view_blob1 getting
140349235863128:fc_blob1 getting
relu2
relu2 was added to layers
140349235863128:relu_blob2 was added to blobs
140349235863128:relu_blob2 getting
fc2
fc2 was added to layers
140349235862912:fc_blob2 was added to blobs
140349235863128:relu_blob2 getting
view2
view2 was added to layers
view2
140349235863272:view_blob2 was added to blobs
Traceback (most recent call last):
  File "sppe_pytorch_2_caffe.py", line 19, in <module>
    pytorch_to_caffe.trans_net(pose_model,input,name)
  File "/home/chen/object-detection/pytorch2caffe/pytorch_to_caffe.py", line 448, in trans_net
    out = net.forward(input_var)
  File "/home/chen/object-detection/pytorch2caffe/example/SPPE/src/models/FastPose.py", line 29, in forward
    out = self.preact(x)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/chen/object-detection/pytorch2caffe/example/SPPE/src/models/layers/SE_Resnet.py", line 72, in forward
    x = self.layer1(x)  # 256 * h/4 * w/4
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/chen/object-detection/pytorch2caffe/example/SPPE/src/models/layers/SE_Resnet.py", line 34, in forward
    out = self.se(out)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/chen/object-detection/pytorch2caffe/example/SPPE/src/models/layers/SE_module.py", line 27, in forward
    y = y.view(b, c, 1, 1)
  File "/home/chen/object-detection/pytorch2caffe/pytorch_to_caffe.py", line 290, in _view
    bottom=[log.blobs(input)],top=top_blobs)
  File "/home/chen/object-detection/pytorch2caffe/pytorch_to_caffe.py", line 75, in blobs
    print("{}:{} getting".format(var, self._blobs[var]))
KeyError: 140349235863416
```

Looking forward to your reply. Thanks in advance.

ChenYingpeng · Mar 20, 2019

Hi Yingpeng, some layer or operation in your network is not implemented in nn_tools, which can result in this kind of KeyError. Can you provide your network structure so I can check which operation is not implemented?
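A quick way to dump the structure for this kind of check is simply to print the model; a minimal sketch, assuming the same `createModel()` used in the conversion script above:

```python
from SPPE.src.models.FastPose import createModel

pose_model = createModel()

# Prints the full module tree (convolutions, batch norms, the SE blocks, the upsampling head, ...)
print(pose_model)

# Or list just the distinct layer types that appear in the network
print({type(m).__name__ for m in pose_model.modules()})
```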

hahnyuan · Mar 21, 2019

@hahnyuan This is my PyTorch model link: https://pan.baidu.com/s/1ENaGqag--zT56UgR-v3IVA (password: fvqn).

ChenYingpeng · Mar 21, 2019

Hi, are you using https://github.com/Amanbhandula/AlphaPose ? The PixelShuffle operation is not supported.

hahnyuan · Mar 22, 2019

Yes, I use that repo. I have already rewritten a pixel_shuffle layer in Caffe. The pixel shuffle layer has no weights, and I want to get the weights of the other layers. Could I skip this layer and only extract the weights of the other layers?
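Since pixel shuffle has no parameters, nothing in the checkpoint belongs to it; a minimal sketch of checking which layers actually carry weights, assuming the same duc_se.pth path as the script above (my own illustration, not part of nn_tools):

```python
import torch

# The checkpoint is a plain state_dict (the conversion script loads it with load_state_dict)
state = torch.load('./models/sppe/duc_se.pth', map_location='cpu')

# PixelShuffle contributes no entries here, so every tensor listed belongs to a
# layer that does have weights (conv, batch norm, fc, ...)
for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```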

ChenYingpeng · Mar 22, 2019

OK, I will add a feature to nn_tools. It will produce a placeholder in the Caffe prototxt when an operation is not supported.

hahnyuan · Mar 22, 2019

Done! Unsupported operations are now transferred to a Python layer in Caffe, which you can implement yourself.
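For example, such a Python layer could be filled in roughly like the sketch below. This is a hedged illustration: the class name, the use of `param_str` for the upscale factor, and the assumption that the generated prototxt's `python_param` points at this module are mine, not something nn_tools guarantees.

```python
import caffe

class PixelShuffleLayer(caffe.Layer):
    """Hypothetical Python-layer implementation of PixelShuffle (depth-to-space)."""

    def setup(self, bottom, top):
        # Upscale factor passed via param_str in the prototxt, e.g. param_str: "2"
        self.r = int(self.param_str) if self.param_str else 2

    def reshape(self, bottom, top):
        n, c, h, w = bottom[0].data.shape
        r = self.r
        top[0].reshape(n, c // (r * r), h * r, w * r)

    def forward(self, bottom, top):
        n, c, h, w = bottom[0].data.shape
        r = self.r
        # Rearrange (N, C*r*r, H, W) -> (N, C, H*r, W*r); a pure reshuffle, no weights involved
        x = bottom[0].data.reshape(n, c // (r * r), r, r, h, w)
        top[0].data[...] = x.transpose(0, 1, 4, 2, 5, 3).reshape(
            n, c // (r * r), h * r, w * r)

    def backward(self, top, propagate_down, bottom):
        pass  # inference only
```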

hahnyuan · Mar 22, 2019