
This project is an official implementation of our ECCV 2018 paper "Simple Baselines for Human Pose Estimation and Tracking" (https://arxiv.org/abs/1804.06208).

103 human-pose-estimation.pytorch issues

https://github.com/microsoft/human-pose-estimation.pytorch#results-on-mpii-val — what is the meaning of "[email protected]" at the far right of the table?

Hello, I used the network model from this paper to estimate gaze points. The loss is already very small during training, but there is no...

How can I use this pose estimation model, trained on COCO, to get keypoint coordinates on my own dataset? In other words, can I just use the pre-trained model trained on COCO...
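
For reference, a minimal sketch of getting keypoint coordinates from a heatmap model of this kind: run the cropped person image through the network and take the argmax of each joint's heatmap. The model construction and checkpoint path are left as placeholders, since they depend on the repository's config; this is not the repository's exact inference code.

```python
import numpy as np
import torch

def heatmaps_to_keypoints(heatmaps):
    """Convert (num_joints, H, W) heatmaps into (num_joints, 3) rows of
    (x, y, confidence) in heatmap coordinates."""
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_joints, -1)
    idx = flat.argmax(axis=1)
    conf = flat.max(axis=1)
    return np.stack([idx % w, idx // w, conf], axis=1)

# model = ...  # build the pose network from its config (repo-specific)
# model.load_state_dict(torch.load(checkpoint_path, map_location='cpu'))
# model.eval()
# with torch.no_grad():
#     heatmaps = model(crop_tensor)[0].cpu().numpy()  # crop_tensor: (1, 3, 256, 192)
# keypoints = heatmaps_to_keypoints(heatmaps)
# # Heatmap resolution is a fraction of the input crop, so scale the coordinates
# # back up and then map them from the crop into the original image.
```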

Hi, I downloaded the weights xxx.pth.tar from Google Drive, but when I try to untar it, it says that it is not an archive. ![image](https://user-images.githubusercontent.com/60471108/109204805-15dccb00-77a6-11eb-8e61-419228b2da17.png) Any recommendation here? Thanks
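
Worth noting: despite the .pth.tar extension, checkpoints like these are usually plain torch.save files rather than tar archives, so the usual fix is to skip untarring and read them with torch.load. A sketch, assuming that is the case here; the filename is the placeholder from the issue above.

```python
import torch

# No untarring needed: load the file directly as a PyTorch checkpoint.
checkpoint = torch.load('xxx.pth.tar', map_location='cpu')
print(type(checkpoint))  # typically an OrderedDict of parameter tensors
```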

Section 4.2 of the paper says: > It contains 514 videos including 66,374 frames in total, split into 300, 50 and 208 videos for training, validation and test set respectively. Did you use the PoseTrack 2017 dataset rather than the 2018 one? Hoping for an answer.

Does anyone know what the difference is between setting target_weight to False and True? From the code, setting it to True seems to mean that the prediction loss is not computed for not-visible keypoints, but I suspect that is not quite the intended meaning. I hope someone can help me clear up this confusion, thanks!!!
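
For context, here is a minimal sketch (paraphrased, not the repository's exact code) of how a target-weighted heatmap MSE loss of this kind behaves: when use_target_weight is enabled, each joint's predicted and ground-truth heatmaps are multiplied by that joint's weight before the MSE, so joints with weight 0 (e.g. unlabeled or not-visible ones) contribute nothing to the loss.

```python
import torch
import torch.nn as nn

class JointsMSELoss(nn.Module):
    def __init__(self, use_target_weight):
        super().__init__()
        self.criterion = nn.MSELoss()
        self.use_target_weight = use_target_weight

    def forward(self, output, target, target_weight):
        # output/target: (N, num_joints, H, W); target_weight: (N, num_joints, 1)
        n, k = output.shape[:2]
        pred = output.reshape(n, k, -1)
        gt = target.reshape(n, k, -1)
        loss = 0.0
        for j in range(k):
            if self.use_target_weight:
                w = target_weight[:, j]  # (N, 1); weight 0 masks the joint out
                loss += 0.5 * self.criterion(pred[:, j] * w, gt[:, j] * w)
            else:
                loss += 0.5 * self.criterion(pred[:, j], gt[:, j])
        return loss / k
```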

Hi, I encountered this error when trying to make the libs: error: command '/usr/bin/nvcc' failed with exit status 1. I guess it has something to do with CUDA,...

Hi, thanks for your code! But I have a question about the flip test. I don't know why you do a shift on output_flipped here. The comment says that "feature is...
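
For context, a minimal NumPy sketch (names are illustrative, not the repository's code) of what a flip test with a heatmap shift does: heatmaps from the flipped image are mirrored back, left/right joint channels are swapped, and, because the downsampled feature map is not exactly mirror-aligned, the flipped heatmaps are shifted by one column before averaging with the originals.

```python
import numpy as np

def flip_test_merge(output, output_flipped, flip_pairs, shift_heatmap=True):
    """Average heatmaps from the original and horizontally flipped image.

    output, output_flipped: (N, num_joints, H, W) heatmaps.
    flip_pairs: list of (left_idx, right_idx) joint pairs to swap back.
    """
    # Mirror the flipped-image heatmaps back along the width axis.
    flipped = output_flipped[..., ::-1].copy()
    # Swap left/right joint channels so they match the original image.
    for left, right in flip_pairs:
        flipped[:, [left, right]] = flipped[:, [right, left]]
    # The downsampled feature map is misaligned by roughly one pixel after
    # flipping; shifting one column to the right compensates for this.
    if shift_heatmap:
        flipped[:, :, :, 1:] = flipped[:, :, :, :-1].copy()
    return (output + flipped) * 0.5
```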

I noticed these in the code:
```
resnet_spec = {18: (BasicBlock, [2, 2, 2, 2]),
               34: (BasicBlock, [3, 4, 6, 3]),
               50: (Bottleneck, [3, 4, 6, 3]),
               101: (Bottleneck, [3,...
```