namepllet
We used the action split.
For the 21 hand joints, I changed the joint order and the coordinate system (HO3D -> MANO, OpenGL -> OpenCV). For the pose parameters, refer to https://github.com/namepllet/HandOccNet/issues/10#issuecomment-1290370078
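A minimal sketch of the two conversions. The y/z sign flip between the OpenGL convention (y up, z backward) and the OpenCV convention (y down, z forward) is standard; the `HO3D_TO_MANO` permutation below is a hypothetical placeholder, since the real index mapping lives in the repo's dataset code.

```python
import numpy as np

def opengl_to_opencv(joints):
    """joints: (N, 3) array; OpenCV flips y (down) and z (forward) vs OpenGL."""
    out = joints.copy()
    out[:, 1:] *= -1
    return out

# Hypothetical identity placeholder; the actual HO3D -> MANO permutation
# is defined in the repo's dataset loading code.
HO3D_TO_MANO = np.arange(21)

joints_ho3d = np.random.randn(21, 3)          # camera-space joints, OpenGL convention
joints_mano = opengl_to_opencv(joints_ho3d)[HO3D_TO_MANO]
```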
We sum the absolute values over the channel dimension to reduce it to 1, and plot the resulting feature map with the matplotlib package.
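A sketch of that visualization, assuming a (C, H, W) feature map; the shapes and filename are illustrative, not the repo's exact code.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, saves straight to file
import matplotlib.pyplot as plt

def channel_abs_sum(feat):
    """feat: (C, H, W) feature map -> (H, W) by summing |values| over channels."""
    return np.abs(feat).sum(axis=0)

feat = np.random.randn(256, 32, 32)           # e.g. a backbone feature map
fmap = channel_abs_sum(feat)
plt.imshow(fmap, cmap='viridis')
plt.axis('off')
plt.savefig('feature_map.png', bbox_inches='tight')
plt.close()
```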
The correlation map in Figure 4 shows the correlation between a single query (the red point) and the other keys. So you can choose any query and reshape the key dimension to 32x32 to visualize the correlation...
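A minimal sketch of that reshape, assuming dot-product attention with a 32x32 key grid; the feature dimension and softmax normalization here are assumptions, not the repo's exact code.

```python
import numpy as np

def correlation_map(query, keys, hw=(32, 32)):
    """query: (C,) feature at the chosen (red) point; keys: (H*W, C).
    Returns an (H, W) map of softmax-normalized dot-product correlations."""
    corr = keys @ query                  # (H*W,) similarity scores
    corr = np.exp(corr - corr.max())     # numerically stable softmax
    corr /= corr.sum()
    return corr.reshape(hw)

cmap = correlation_map(np.random.randn(64), np.random.randn(32 * 32, 64))
```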
Since we refactored our code after the paper submission, the feature map may look different. And for HO3D images, use the pretrained model weights for HO3D. (The demo model is not...
Could you capture your train_logs.txt (saved in output/log) and share it? And did you get the same results as ours using the pretrained model?
It seems you used fewer than 4 GPUs. We used 4 GPUs, so you should scale the learning rate according to your batch size. Please refer to https://github.com/namepllet/HandOccNet/issues/9#issuecomment-1217812789
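A sketch of the linear scaling rule this implies (lr proportional to total batch size); the base learning rate and per-GPU batch size below are illustrative assumptions, not the repo's exact config.

```python
def scale_lr(base_lr, base_total_batch, your_total_batch):
    """Linear scaling rule: keep lr / total_batch constant when changing GPU count."""
    return base_lr * your_total_batch / base_total_batch

# e.g. lr tuned for 4 GPUs x 16 images each; with 1 GPU the total batch shrinks 4x:
lr = scale_lr(1e-4, base_total_batch=4 * 16, your_total_batch=1 * 16)
```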
The pose_m parameter in the original dataset's annotation file contains PCA coefficients. I changed it to the axis-angle representation.
Here are the core lines we used (the `osp` import is added here for completeness):

```python
import os.path as osp
from manopth.manolayer import ManoLayer

manolayer_left = ManoLayer(mano_root=osp.join(cfg.mano_path, 'mano', 'models'),
                           flat_hand_mean=False, use_pca=True, side='left', ncomps=45)
manolayer_right = ManoLayer(mano_root=osp.join(cfg.mano_path, 'mano', 'models'),
                            flat_hand_mean=False, use_pca=True, side='right', ncomps=45)
mano_pose...
```
For the original dataset, set flat_hand_mean=True here: https://github.com/namepllet/HandOccNet/blob/65c00af06ad81f568a6ad0ff4919f7f1d6c65e44/common/utils/mano.py#L37