
[CVPR 2023] Code for paper 'A2J-Transformer: Anchor-to-Joint Transformer Network for 3D Interacting Hand Pose Estimation from a Single RGB Image'

18 A2J-Transformer issues

I changed the cur_dir path, but when I run sh make.sh (win64), the terminal displays: ValueError: path '/data/data2/a2jformer/camera_ready/dab_deformable_detr/ops/src/vision.cpp' cannot be absolute. How can I solve this problem? Thanks.
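That ValueError typically comes from distutils/setuptools refusing absolute source paths on Windows. A minimal sketch of one possible workaround is shown below; it is not the repository's actual ops/setup.py, and the file layout and extension name are assumptions for illustration only. The idea is to convert the collected source paths to paths relative to setup.py before passing them to the extension builder.

```python
# Sketch only: NOT the repository's setup.py. Illustrates converting absolute
# source paths to paths relative to setup.py, which distutils/setuptools
# requires on Windows ("path ... cannot be absolute").
import os
import glob
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

this_dir = os.path.dirname(os.path.abspath(__file__))

# Collect sources as absolute paths, then make them relative to this_dir.
abs_sources = glob.glob(os.path.join(this_dir, "src", "*.cpp")) + \
              glob.glob(os.path.join(this_dir, "src", "cuda", "*.cu"))
rel_sources = [os.path.relpath(p, this_dir) for p in abs_sources]

setup(
    name="MultiScaleDeformableAttention",  # assumed extension name
    ext_modules=[
        CUDAExtension(
            name="MultiScaleDeformableAttention",
            sources=rel_sources,  # relative paths avoid the ValueError on Windows
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```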

Hello, I would like to ask how I can save the 3D keypoint coordinates detected by this model.
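A minimal sketch of one way to dump predictions is below. It is not code from this repository; it assumes the test loop yields a (num_joints, 3) array of (x, y, z) coordinates per image, and the function and file names are placeholders.

```python
# Sketch only (not repository code): accumulate per-image 3D joint
# predictions and write them to a JSON file at the end of evaluation.
import json
import numpy as np

all_preds = {}

def save_prediction(image_id, joints_xyz):
    """Store one prediction; joints_xyz is assumed to be a (num_joints, 3) array."""
    all_preds[str(image_id)] = np.asarray(joints_xyz, dtype=np.float32).tolist()

def dump_predictions(path="pred_joints_3d.json"):
    """Write all accumulated predictions to disk."""
    with open(path, "w") as f:
        json.dump(all_preds, f)
```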

Hello, I could not find the three files referenced in the Hands2017 training code in the folder. Could you provide a way to obtain them?
keypointsfile = '/data/data1/zhangboshen/CODE/219_A2J_original/Anchor_Pose_fpn/data/Hands2017/train_keypointsUVD.mat'
center_train = scio.loadmat('/data/data1/zhangboshen/CODE/219_A2J_original/Anchor_Pose_fpn/data/Hands2017/train_centre_pixel.mat')
center_test = scio.loadmat('/data/data1/zhangboshen/CODE/219_A2J_original/Anchor_Pose_fpn/data/Hands2017/test_centre_pixel.mat')

Hello, I want to visualize the hand skeleton on the output image; how should I do that? Are you using your vis_keypoints(img, kps, kps_gt, bbox, score, skeleton, filename, score_thr=0.4, line_width=3,...
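For reference, a generic skeleton-drawing sketch is given below. It is NOT the repository's vis_keypoints function; the argument names and the skeleton format (a list of joint index pairs) are assumptions for illustration.

```python
# Sketch only: draw 2D keypoints and skeleton edges on an image with OpenCV.
import cv2
import numpy as np

def draw_skeleton(img, kps, skeleton, score_thr=0.4, line_width=3):
    """kps: (num_joints, 3) array of (x, y, score); skeleton: list of (i, j) joint index pairs."""
    canvas = img.copy()
    # Draw bones between joints whose scores pass the threshold.
    for i, j in skeleton:
        if kps[i, 2] > score_thr and kps[j, 2] > score_thr:
            p1 = (int(kps[i, 0]), int(kps[i, 1]))
            p2 = (int(kps[j, 0]), int(kps[j, 1]))
            cv2.line(canvas, p1, p2, color=(0, 255, 0), thickness=line_width)
    # Draw the joints themselves.
    for x, y, s in kps:
        if s > score_thr:
            cv2.circle(canvas, (int(x), int(y)), radius=3, color=(0, 0, 255), thickness=-1)
    return canvas
```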

Could you please provide a detailed description of the GPU used and training time? I am trying to reproduce your results. I set the batch_size to 24 and used dual...

Hello, thanks for your great work. When I try to run the code on my computer, it always stops at 'get bbox and root depth from groundtruth annotation 40%'. I set...

Hi, I'm a student studying hand pose estimation. First of all, thanks a lot for your good project and paper. However, I have a question. I want to...

When I run the make.sh script, I get the error "ModuleNotFoundError: No module named 'torch'", but PyTorch is already installed in my virtual environment. Why does this error occur?
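A common cause is that the script invokes a different Python interpreter than the one in the activated environment. A quick diagnostic sketch (not repository code) is shown below; run it with the same python command that make.sh uses.

```python
# Sketch only: check that this interpreter is the one from your virtual
# environment and that it can actually see torch.
import sys
print("interpreter:", sys.executable)

try:
    import torch
    print("torch:", torch.__version__, "cuda:", torch.version.cuda)
except ModuleNotFoundError:
    print("torch is not visible to this interpreter; activate the env first "
          "or call the env's python explicitly inside make.sh")
```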

Thanks for your great work! Will you provide a visualization demo?