Keqi Chen

Results 11 comments of Keqi Chen

> @keqizero same problem. How did you solve that?

No, I haven't solved it.

> @keqizero I find that the official YOLOX weights end with `.pth`, while ByteTrack's end with `.pth.tar`. The evaluation detection results from official YOLOX and ByteTrack are different. Thus, I am confused about using...
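For anyone hitting the same issue, here is a minimal sketch of loading either checkpoint format with plain PyTorch. The `"model"` key used to unwrap ByteTrack-style `.pth.tar` training checkpoints is an assumption here; inspect `ckpt.keys()` in your own file to confirm.

```python
import torch

def load_detector_weights(model, ckpt_path):
    """Load YOLOX-style weights from either a plain .pth state dict
    or a .pth.tar training checkpoint that wraps the weights."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Training checkpoints often bundle optimizer state etc.; the "model"
    # key is an assumption -- print ckpt.keys() to check your own file.
    state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print("missing keys:", missing)
    print("unexpected keys:", unexpected)
    return model
```

Comparing the missing/unexpected keys reported for the two files is a quick way to see whether they actually hold the same weights.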

Hi, have you successfully extracted the video features? I am still confused about the configuration of the model, and I failed to load the weights. Could you share...

> Have you solved it? [@keqizero](https://github.com/keqizero)

No, I haven't.

Hi, yes, the training of SelfPose3d only requires 2D pseudo poses, and we only provide the code to generate the 2D pseudo poses. I am not sure what you mean by...

> Can you give a detailed explanation of how to train and evaluate this model on my own dataset? The readme for pseudo_2d_labels_generation is not clear to me.

To train...

> Thank you for your reply. When I followed the steps in pseudo_2d_labels_generation, step s1 generates image_info_train_panoptic.json from a TRAIN_DB_PATH. I want to know how I can generate the TRAIN_DB_PATH...

> Thanks for your reply. When I generate TRAIN_DB_PATH from lib/dataset/_get_db, I found that the dataset reads a pickle file, group_train_cam5_pseudo_hrnet_hard_9videos.pkl, for Panoptic and pred_campus_maskrcnn_hrnet_coco.pkl for the Campus dataset. How can I...
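As a rough illustration of what assembling such a pickle for a custom dataset could look like, here is a hedged sketch. The record keys (`image`, `joints_2d`, `joints_2d_vis`, `camera`) follow common VoxelPose-style conventions and are assumptions rather than the repo's documented schema, so compare them against the entries in the provided `.pkl` files before reusing this.

```python
import pickle
import numpy as np

def build_pseudo_db(entries, out_path):
    """Assemble per-image records with pseudo 2D poses and dump them as a db pickle.

    `entries` is an iterable of (image_path, joints_2d, camera) tuples, where
    joints_2d has shape (num_people, num_joints, 2).
    """
    db = []
    for image_path, joints_2d, camera in entries:
        j2d = np.asarray(joints_2d, dtype=np.float32)
        db.append({
            "image": image_path,        # path relative to the dataset root (assumed key)
            "joints_2d": j2d,           # pseudo 2D poses from the 2D estimator
            "joints_2d_vis": np.ones(j2d.shape[:2] + (1,), dtype=np.float32),  # visibility flags
            "camera": camera,           # per-view intrinsics/extrinsics dict (assumed key)
        })
    with open(out_path, "wb") as f:
        pickle.dump(db, f)

# Example: build_pseudo_db(my_entries, "group_train_custom_pseudo.pkl")
```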

Hi, thank you for the subset experiment. Personally, I have never tried training with only one video. To do a quick experiment, I used the pretrained backbone and root net...

> Thank you for your response.
>
> Our configuration (printed at the start) is exactly the same, besides batch_size / GPU count. Here is my training log: [training-log.txt](https://github.com/user-attachments/files/17211094/training-log.txt)
> ...