E2FGVI
Can I replace ModulatedDeformConv2dFunction with another function? I'm having trouble converting the model to another format so I can save it with torch.jit.save.
It looks like the failure is caused by ModulatedDeformConv2dFunction not being exportable:
Traceback (most recent call last):
File "test6.py", line 370, in <module>
main_worker()
File "test6.py", line 294, in main_worker
traced_model.save("traced_model3.pt")
File "/Users/mac/opt/anaconda3/envs/e2fgvi36/lib/python3.6/site-packages/torch/jit/_script.py", line 487, in save
return self._c.save(*args, **kwargs)
RuntimeError:
Could not export Python function call 'ModulatedDeformConv2dFunction'. Remove calls to Python functions before export. Did you forget to add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
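For context, this error is not specific to E2FGVI: torch.jit.trace records any custom torch.autograd.Function application as a Python call node, and ScriptModule.save() then refuses to serialize the graph. A minimal standalone reproduction (MyFunc/UsesFunc are hypothetical names, not the real deform-conv op):

```python
import os
import tempfile
import torch

# A custom autograd Function, standing in for ModulatedDeformConv2dFunction.
class MyFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad):
        return grad * 2

class UsesFunc(torch.nn.Module):
    def forward(self, x):
        return MyFunc.apply(x)

# Tracing succeeds: the Function is recorded as an opaque Python call.
traced = torch.jit.trace(UsesFunc(), (torch.randn(2),))

# Saving fails with "Could not export Python function call 'MyFunc'",
# the same RuntimeError reported above.
path = os.path.join(tempfile.mkdtemp(), "traced.pt")
try:
    traced.save(path)
    save_failed = False
except RuntimeError:
    save_failed = True
```

So the fix has to remove (or replace) the custom Function from the forward path before export.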
The save code was added like this:
...
with torch.no_grad():
    masked_imgs = selected_imgs * (1 - selected_masks)
    mod_size_h = 60
    mod_size_w = 108
    h_pad = (mod_size_h - h % mod_size_h) % mod_size_h
    w_pad = (mod_size_w - w % mod_size_w) % mod_size_w
    masked_imgs = torch.cat(
        [masked_imgs, torch.flip(masked_imgs, [3])],
        3)[:, :, :, :h + h_pad, :]
    masked_imgs = torch.cat(
        [masked_imgs, torch.flip(masked_imgs, [4])],
        4)[:, :, :, :, :w + w_pad]

    ids = torch.randint(10, (1,))
    print(ids.shape)
    ids[0] = len(neighbor_ids)
    print(ids.item())

    # jit.trace seems to treat every input as a tensor, so pass the second
    # (integer) argument as a tensor, and in the inpaint forward use:
    #   l_t = num_local_frames.item()
    traced_model = torch.jit.trace(model, (masked_imgs, ids))
    # torch.save(traced_model, "traced_model2.pt")
    traced_model.save("traced_model3.pt")
    exit()
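The tensor-argument trick above can be shown in isolation. A minimal sketch (TinyInpaintStub is a hypothetical stand-in for the inpainting model): the frame count is passed as a 1-element tensor and read back with .item() inside forward. Note that tracing bakes the .item() value into the graph as a constant, so the traced model is only valid for that frame count:

```python
import torch
import torch.nn as nn

class TinyInpaintStub(nn.Module):
    def forward(self, frames, num_local_frames):
        # Read the count back out of the tensor, mirroring
        # `l_t = num_local_frames.item()` in the inpaint forward.
        # jit.trace records this as a constant (it emits a TracerWarning).
        l_t = int(num_local_frames.item())
        return frames[:, :l_t].mean()

model = TinyInpaintStub()
frames = torch.randn(1, 5, 3, 8, 8)   # (batch, time, channels, h, w)
ids = torch.tensor([3])               # frame count as a tensor argument
traced = torch.jit.trace(model, (frames, ids))
out = traced(frames, ids)
```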
Sorry for the late reply. You could remove ModulatedDeformConv2dFunction from model/modules/feat_prop.py and re-train a new model without deformable alignment, but there is a small cost in inpainting performance.
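As an illustration of that suggestion (the class and channel count below are hypothetical, not the actual feat_prop.py code): once the alignment module uses only ordinary ops like nn.Conv2d, torch.jit.script and torch.jit.trace can both export it, because no Python-only Function call remains in the graph.

```python
import torch
import torch.nn as nn

class PlainAlignment(nn.Module):
    """Deformable alignment replaced by a plain conv (illustrative only)."""

    def __init__(self, channels=8):
        super().__init__()
        # Ordinary Conv2d: fully supported by TorchScript export.
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat):
        return self.conv(feat)

# Scripting succeeds because there is no custom autograd.Function call.
scripted = torch.jit.script(PlainAlignment(8))
y = scripted(torch.randn(1, 8, 16, 16))
```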
@bigmindapp I had the same problem and was about to retrain, did you solve the problem?
@980202006 @bigmindapp I had the same problem. Could you please share how you guys handled this issue?
Have you re-trained the model? What were the results?