
Code for the paper "A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image" (ICCV 2019).

33 A2J issues

I appreciate your great work. To train my own model on my own human pose dataset: 1) How do I generate the center and bounding box for each...

Hi @zhangboshen, I am trying to use the A2J model with the ITOP side-view dataset. Could you please help me clear up the following doubts? 1. `itop_side_mean.npy` ...

I am trying to use this model to run inference on my own pictures; is the bounding box (bndbox) required?

I also found that this model performed very poorly on my own pictures. I was wondering whether it was because my own depth pictures had not...

The output of `python itop_side.py` is: 406it [00:27, 15.02it/s] ('Accuracy:', 0) ('joint_', 0, 'Head', ', accuracy: ', 0) ('joint_', 1, 'Neck', ', accuracy: ', 0) ('joint_', 2, 'RShoulder', ', accuracy: ', 0) ('joint_',...

Thanks for sharing your project. Is there a demo script you can share to test it live with a depth camera?

I built a live version and tested it; the main problems were: 1. Poor robustness to interference — if anything other than the palm (including the arm) appears in the frame, recognition easily fails. 2. When recognition fails, the predicted hand structure no longer looks like a hand. 3. Accuracy on the back of the hand is very low, even for unoccluded poses such as an open palm. The first can be addressed by segmenting the hand out first; localizing the hand by its centroid is a bit crude. The second can be constrained with engineering tricks, or corrected with CycleGAN. But the third looks like a dataset problem; adding data might help to some extent, but because of hardware constraints retraining is quite expensive for me. So I'd like to ask: has your model been fine-tuned on other data, and was rotation/shear augmentation used during training? (I didn't see it in the code.) PS: Since I use an SR300 camera, I used the hands2017 model. PSP: Without CUDA it runs at roughly 0.5 fps. PSPP: I'm puzzled by the choice of 14 keypoints for NYU.
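On the augmentation question above: the A2J code may or may not apply it, but an in-plane rotation of a depth map together with its 2D joint annotations can be sketched as below. This is a minimal illustration, not the repository's pipeline; the function name, nearest-neighbour resampling, and the convention of rotating about the image center are all my assumptions.

```python
import numpy as np

def rotate_depth_and_joints(depth, joints_uv, angle_deg):
    """Hypothetical augmentation sketch: rotate a depth map and its 2D joint
    coordinates (u, v) = (col, row) by angle_deg about the image center."""
    h, w = depth.shape
    theta = np.deg2rad(angle_deg)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    cos, sin = np.cos(theta), np.sin(theta)

    # Forward map for the joints: standard 2D rotation about (cx, cy).
    du, dv = joints_uv[:, 0] - cx, joints_uv[:, 1] - cy
    new_joints = np.stack([cx + cos * du - sin * dv,
                           cy + sin * du + cos * dv], axis=1)

    # Build the rotated image by inverse mapping each output pixel back to
    # its source pixel (nearest neighbour, zero fill outside the frame).
    vv, uu = np.mgrid[0:h, 0:w]
    du, dv = uu - cx, vv - cy
    src_u = np.round(cx + cos * du + sin * dv).astype(int)
    src_v = np.round(cy - sin * du + cos * dv).astype(int)
    valid = (src_u >= 0) & (src_u < w) & (src_v >= 0) & (src_v < h)
    out = np.zeros_like(depth)
    out[valid] = depth[src_v[valid], src_u[valid]]
    return out, new_joints
```

A shear could be added the same way by replacing the rotation matrix with a shear matrix in both the forward and inverse maps.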

Hello, can you provide the .mat file of the detection bounding boxes for the ITOP side and top training sets? Looking forward to your reply, thanks!

I appreciate your great work. When I test my own depth data, I can't get the center coordinates and depth values of the hand bbox in advance. Therefore, when I try to remove...
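Several of the questions above ask how to obtain a center and bounding box for custom depth data. One common workaround, when no detector output is available, is to threshold the depth map to a plausible range and take the foreground centroid and extent. This is a rough sketch under my own assumptions (the function name and the 0.2–1.5 m near/far thresholds are illustrative, not from the repository):

```python
import numpy as np

def depth_center_and_bbox(depth, near=0.2, far=1.5):
    """Hypothetical helper: estimate the foreground center (cx, cy, cz) and
    bounding box (x_min, y_min, x_max, y_max) from a depth map in meters.
    The near/far thresholds are assumptions and must be tuned per camera."""
    mask = (depth > near) & (depth < far)   # keep plausible foreground pixels
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None, None                    # nothing in range
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())
    center = (xs.mean(), ys.mean(), depth[mask].mean())
    return center, bbox

# Synthetic example: a 100x100 depth map with a patch at 0.6 m.
depth = np.zeros((100, 100), dtype=np.float32)
depth[40:60, 30:50] = 0.6
center, bbox = depth_center_and_bbox(depth)
```

For real frames a simple depth threshold is fragile (it keeps the arm and background clutter, as noted in one of the issues above); a person/hand detector or connected-component filtering is usually more reliable.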