ypflll
I ran into the same problem when using torchfile.load() to load the .t7 file. @wangbm @1adrianb Do you know where the problem is? Thanks.
@zoooo0820 The annotation is 68x2; you need to run 3D-FAN-depth to get the depth.
@Kiris-Wu Sorry, I didn't run the full dataset. It seems the PyTorch version doesn't provide a 2D-to-3D model, while this repo does.
@burnmyletters @t-martyniuk Hi, thank you for releasing the model. I found it hard to integrate it into my network for training, because the released model is saved...
I found that the state_dict can be exported from the released model, so I solved my issue.
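A minimal sketch of this export, assuming a hypothetical architecture (the released checkpoint pickles the whole model object; extracting its state_dict gives plain weight tensors you can load into your own copy of the network for training):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical stand-in architecture for the released model.
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 1, 3))

tmp = tempfile.mkdtemp()
full_path = os.path.join(tmp, "released_model.pth")
weights_path = os.path.join(tmp, "weights.pth")

torch.save(net, full_path)  # how the model was released: whole pickled object

# Unpickle the full model (weights_only=False is required on recent PyTorch
# when loading a full pickled module), then export only its weights.
released = torch.load(full_path, weights_only=False)
torch.save(released.state_dict(), weights_path)

# The weights now load into your own instance of the same architecture.
my_net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 1, 3))
my_net.load_state_dict(torch.load(weights_path))
```

The advantage is that `weights.pth` no longer depends on the original class definition being importable from the same module path.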
@Sh0lim I generate the GT using a semantic segmentation network like Mask R-CNN or MobileNet.
Yes. A trimap can be generated from a rough mask, so I think saliency detection can be used here.
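A common way to do this (a sketch, not the exact pipeline used here) is to erode the rough mask for the confident foreground, dilate it for the confident background boundary, and mark the band in between as unknown:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def trimap_from_mask(mask, band=5):
    """Turn a rough binary mask into a trimap:
    255 = foreground, 0 = background, 128 = unknown band at the boundary."""
    fg = binary_erosion(mask, iterations=band)       # confident foreground
    maybe = binary_dilation(mask, iterations=band)   # mask grown outward
    trimap = np.zeros(mask.shape, dtype=np.uint8)
    trimap[maybe] = 128   # everything near the boundary starts as unknown
    trimap[fg] = 255      # the eroded core is confident foreground
    return trimap

# Toy example: a square "rough mask" such as saliency detection might give.
mask = np.zeros((40, 40), dtype=bool)
mask[10:30, 10:30] = True
tri = trimap_from_mask(mask, band=3)
```

The `band` width controls how much slack the matting network gets around the rough boundary.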
I use Mask R-CNN or DeepLabV3+.
@ZephirGe You can use Bodynet's code to do this. https://github.com/gulvarol/bodynet/blob/master/fitting/fit_surreal.py
@gaizixuan0128 You can resample the SMPL vertices from the UV map by interpolation, or you can directly use all valid vertices on the UV map (about 50k) as the fitting input, and this...
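The interpolation step above can be sketched as bilinear sampling of a UV position map. This assumes a map of shape (H, W, 3) whose texels store 3D surface points, and known per-vertex (u, v) coordinates; both the function name and the layout are illustrative, not from any specific codebase:

```python
import numpy as np

def sample_uv(uv_map, uv_coords):
    """Bilinearly sample a (H, W, 3) position map at continuous (u, v)
    coordinates in [0, 1], returning one 3D point per vertex.
    uv_coords: (N, 2) array of per-vertex UV coordinates (assumed known)."""
    H, W, _ = uv_map.shape
    x = uv_coords[:, 0] * (W - 1)   # u -> column
    y = uv_coords[:, 1] * (H - 1)   # v -> row
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = (x - x0)[:, None], (y - y0)[:, None]
    # Blend the four surrounding texels with bilinear weights.
    top = uv_map[y0, x0] * (1 - wx) + uv_map[y0, x1] * wx
    bot = uv_map[y1, x0] * (1 - wx) + uv_map[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Toy check: a 5x5 map whose texel (r, c) stores the point (r, c, 0).
H = W = 5
grid = np.zeros((H, W, 3))
grid[..., 0], grid[..., 1] = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
pts = sample_uv(grid, np.array([[0.25, 0.5]]))  # u=0.25 -> col 1, v=0.5 -> row 2
```

The same routine works whether you sample at the SMPL vertices' UV coordinates or take every valid texel as a fitting point.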