Shichao Li
> Hi, thanks for the great work. I was just wondering if you have the 2D HRNet keypoints (17 joints) in .npz format, just like [VideoPose3D](https://github.com/facebookresearch/VideoPose3D/blob/main/DATASETS.md) or [PoseAug](https://github.com/jfzhang95/PoseAug/blob/main/DATASETS.md)? Hi,...
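As a point of reference, the VideoPose3D-style datasets referenced above are plain NumPy archives. The sketch below shows a minimal round-trip through an .npz file; the key name `positions_2d`, the file name, and the array shape are illustrative assumptions, not this repo's actual export format.

```python
import numpy as np

# Hypothetical example: 2D detections for one clip, shaped
# (frames, joints, xy) with 17 COCO-style joints.
keypoints = np.random.rand(100, 17, 2).astype(np.float32)

# Save in a VideoPose3D-like layout (the key name is an assumption).
np.savez_compressed("hrnet_2d.npz", positions_2d=keypoints)

# Reload and inspect.
loaded = np.load("hrnet_2d.npz")
print(loaded["positions_2d"].shape)  # (100, 17, 2)
```

Distributing detections this way keeps the 2D stage decoupled from the 3D lifting stage, which is why both linked repos use it.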
Link: https://arxiv.org/abs/2011.08464
Will the used datasets in the paper be released? I'd like to reproduce the results in the paper.
> May I ask a question? When estimating the 2D pose from the cropped image, how is the anchor of the cropped image determined? Do you mean the bounding box?...
> > > May I ask a question? When estimating the 2D pose from the cropped image, how is the anchor of the cropped image determined? > > > >...
Thank you for your interest in this study. I did not convert the H3.6M data format back to COCO. Instead, I used model weights pre-trained on the COCO dataset to...
> Hello, the files you provided for download (the inference images and the model) cannot be downloaded. Could you upload them again? Hello, https://drive.google.com/file/d/1NjQFCz0GdS7oIdYrK5ouxEI07wYYh4r8/view?pli=1 can be downloaded from Google Drive.
> Thank you for kindly providing 2D HRNet model. Please let me raise an issue about reproducing 2D detection results. > > I could not reproduce Average Joint Localization Error...
> Thank you for providing the details. But I've tried it with the code snippet below and still get a difference: > > ``` > def readFrames(videoFile, destination_dir, sequence_name): > global...
> Thank you for your analysis. I think the timestamps are the same. For example, the 1002.jpg that this repo provides corresponds to either the 1002nd or the 1003rd frame; I tried both and it...
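The 1002nd-vs-1003rd ambiguity above is a classic off-by-one between 1-based frame filenames and 0-based video frame indices. A small helper that makes the assumed convention explicit (the filename pattern and the `one_based` default are assumptions for illustration):

```python
def filename_to_frame_index(name, one_based=True):
    """Map a frame filename like '1002.jpg' to a 0-based video frame index.

    Whether the exported filenames count from 1 or from 0 is exactly the
    ambiguity in question; this helper just pins the assumption down so
    both conventions can be tested side by side.
    """
    stem = name.rsplit(".", 1)[0]
    idx = int(stem)
    return idx - 1 if one_based else idx

print(filename_to_frame_index("1002.jpg"))                    # 1001
print(filename_to_frame_index("1002.jpg", one_based=False))   # 1002
```

When comparing against a decoded video (e.g. via OpenCV's `cv2.VideoCapture`), feeding both candidate indices through a helper like this and checking pixel differences is a quick way to settle which convention the exported frames follow.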