deep-high-resolution-net.pytorch

about data

Open mrzhangzizhen123 opened this issue 5 years ago • 9 comments

Can this code be used with other data, for example medical images? Please give some specific suggestions. Thank you.

mrzhangzizhen123 avatar Jun 12 '19 01:06 mrzhangzizhen123

I have the same question. Have you solved it? Thanks!

frankite avatar Jul 15 '19 14:07 frankite

I have the same question. Please share how you got keypoints on your own data.

gireek avatar Jul 22 '19 03:07 gireek

@mrzhangzizhen123, I have the same question. Have you solved it?

chaurasiat avatar Jul 22 '19 04:07 chaurasiat

Hi, I participated in an ICCV 2019 workshop & challenge and used this code for the tiger pose estimation task. After making some minor modifications to the original code, I took 2nd place on the final leaderboard. The modified code has been published, and I hope it helps you apply the HRNet code to your own data.

wanghao14 avatar Aug 12 '19 05:08 wanghao14

Big congratulations to wanghao14!

welleast avatar Aug 12 '19 23:08 welleast

@welleast Thanks for your encouragement. The result depends entirely on the robustness and state-of-the-art performance of your great pose estimation work.

wanghao14 avatar Aug 13 '19 05:08 wanghao14

Here is how I trained HRNet with my own dataset. Some of the steps may not be clear enough, but I think you can use them as a reference. I will be pleased if this helps.

  1. Convert your dataset into COCO format; you also need a bounding box for every person in your images.
  2. "mkdir" ./data/xxx/annotations, ./data/xxx/images, ./data/xxx/person_detections_results and put your data into these directories, mirroring the layout of ./data/coco.
  3. Copy ./lib/dataset/coco.py to ./lib/dataset/xxx.py.
  4. Modify ./lib/dataset/xxx.py: adapt def image_path_from_index(self, index) to the naming format of your images (see the Python sketch after this list).
  5. Copy ./experiments/coco to ./experiments/xxx.
  6. For example, modify ./experiments/xxx/hrnet/w32_256x192_adam_lr1e-3.yaml: DATASET.DATASET: 'xxx', DATASET.ROOT: './data/xxx', DATASET.TRAIN_SET: 'train' and DATASET.TEST_SET: 'val' (if you need them), and TEST.COCO_BBOX_FILE: './data/xxx/person_detections_results/xxx_detections_person.json' (see the YAML sketch below).
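Below is a minimal, hypothetical Python sketch of the idea in step 4. The real method lives inside the dataset class you copy from ./lib/dataset/coco.py and takes only self and index; the zero-padded file-name pattern, the 'xxx' paths, and the train/val split names are assumptions you should adapt to your own data.

```python
import os

# Hypothetical sketch of step 4 (not the repo's exact code). Inside the dataset
# class copied from ./lib/dataset/coco.py the method takes only (self, index);
# it is written here as a plain function so it can be tried on its own.
def image_path_from_index(root, image_set, index):
    """Map an image id to a path such as ./data/xxx/images/train/000000000123.jpg."""
    file_name = '%012d.jpg' % index  # assumed zero-padded, COCO-style file name
    return os.path.join(root, 'images', image_set, file_name)

# Example:
# image_path_from_index('./data/xxx', 'train', 123)
# -> './data/xxx/images/train/000000000123.jpg'
```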

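As an illustration of step 6, here is a hypothetical excerpt of the edited config. Only the fields mentioned above are shown; everything else from the copied COCO config stays unchanged, and 'xxx' is a placeholder for your dataset name.

```yaml
# Hypothetical excerpt of ./experiments/xxx/hrnet/w32_256x192_adam_lr1e-3.yaml after step 6
DATASET:
  DATASET: 'xxx'
  ROOT: './data/xxx'
  TRAIN_SET: 'train'
  TEST_SET: 'val'
TEST:
  COCO_BBOX_FILE: './data/xxx/person_detections_results/xxx_detections_person.json'
```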
ZP-Guo avatar Nov 05 '19 02:11 ZP-Guo

What tool did you use to create your own dataset?

Gokulnath31 avatar Sep 01 '20 13:09 Gokulnath31
