Peng Lu

Results: 100 comments by Peng Lu

We utilize MMDeploy to export RTMO models and test inference speed via [this script](https://github.com/open-mmlab/mmdeploy/blob/main/tools/profiler.py). You may want to consider using MMDeploy directly.

Our evaluation indicates that ONNXRuntime inference time is 19.1 ms, which is close to your evaluation. You can achieve a higher FPS by using the TensorRT backend.
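As a quick sanity check on such numbers, per-frame latency converts to throughput as `1000 / latency_ms`; the 19.1 ms figure is the ONNXRuntime latency mentioned above.

```python
# Convert a measured per-frame latency (ms) into throughput (frames per second).
def latency_ms_to_fps(latency_ms: float) -> float:
    """Return frames per second for a given per-frame latency in milliseconds."""
    return 1000.0 / latency_ms

print(f"{latency_ms_to_fps(19.1):.1f} FPS")  # 19.1 ms/frame is about 52.4 FPS
```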

Hi, could you please clarify what you mean by an "already processed 3D pose estimation json file"? If you mean dataset annotation files, you can visualize them with the [dataset browser](https://mmpose.readthedocs.io/en/latest/user_guides/prepare_datasets.html#browse-dataset).

You can refer to [custom testing](https://mmpose.readthedocs.io/en/latest/user_guides/train_and_test.html#custom-testing-features) to perform keypoint order conversion during testing.
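As a sketch, such a conversion can be expressed with mmpose's `KeypointConverter` transform placed in the test pipeline; the 3-keypoint mapping below is purely hypothetical, real datasets would list one pair per keypoint.

```python
# Hypothetical reordering of 3 source keypoints into a different target order.
# `KeypointConverter` is an mmpose transform; the (source_index, target_index)
# pairs here are illustrative only.
keypoint_converter = dict(
    type='KeypointConverter',
    num_keypoints=3,    # number of keypoints in the target convention
    mapping=[
        (0, 2),         # source keypoint 0 -> target slot 2
        (1, 0),         # source keypoint 1 -> target slot 0
        (2, 1),         # source keypoint 2 -> target slot 1
    ],
)

# It would then be inserted into the test pipeline, e.g.:
# test_pipeline = [..., keypoint_converter, dict(type='PackPoseInputs')]
```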

It depends on the model you are using. Most top-down methods do not incorporate attention mechanisms.

1. `sigmas` and `joint_weights` should be set in the metainfo file of the dataset.
2. To generate the `bbox_file`, please refer to https://github.com/open-mmlab/mmpose/blob/main/tools/misc/generate_bbox_file.py
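For the first point, a minimal sketch of the relevant metainfo entries, using a hypothetical 3-keypoint dataset (real metainfo files also define `keypoint_info` and `skeleton_info`, with one value per keypoint):

```python
# Illustrative metainfo fragment for a hypothetical 3-keypoint dataset.
# `sigmas` feed the OKS computation during evaluation; `joint_weights`
# weight each keypoint in the loss.
dataset_info = dict(
    dataset_name='my_dataset',        # hypothetical name
    sigmas=[0.025, 0.035, 0.079],     # one OKS sigma per keypoint
    joint_weights=[1.0, 1.0, 1.5],    # one loss weight per keypoint
)
```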

```python
dataset_crowdpose = dict(
    type='HandDataset',
    data_root=data_root,
    data_mode=data_mode,
    ann_file='train_coco.json',
    data_prefix=dict(img='images/'),
    pipeline=val_pipeline,  # val_pipeline / train_pipeline_stage1
)
```

Do not use `val_pipeline` for the training dataset.

> @Ben-Louis, I have another question. I trained RTMO on the COCO dataset with the rtmo-s_8xb32-600e_coco-640x640.py config, but the final mAP only reached 0.607, which is still far from the reported 0.677. What could be the reason?
>
> I only changed batch_size from 32 to 16 (my machine cannot handle 32).
>
> I also compared the s, m, and l backbones and found that the weight of MLECCLoss is 1 rather than 1e-3, while the CrowdPose rtmo-s config also uses 1e-3 for MLECCLoss. Could this be one of the reasons?

The batch size is $8 \times 32 = 256$, not 32. To reproduce the reported results, the batch size must match.
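The `8xb32` in the config name encodes GPUs × samples per GPU. A quick check of the effective batch size, plus the linear LR scaling rule as one illustrative (not RTMO-specific) way to compensate when a smaller batch is unavoidable:

```python
# Effective batch size encoded in the config name "8xb32": 8 GPUs x 32 per GPU.
num_gpus = 8
samples_per_gpu = 32
effective_batch_size = num_gpus * samples_per_gpu
print(effective_batch_size)  # 256

# If only a batch size of 16 fits in memory, the linear scaling rule
# (illustrative; base_lr below is a hypothetical value, not RTMO's)
# would shrink the learning rate proportionally:
base_lr = 4e-3
scaled_lr = base_lr * 16 / effective_batch_size
print(scaled_lr)  # 0.00025
```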

This parameter only influences the score of each person and determines which instances are retained in multi-person scenarios. See https://github.com/open-mmlab/mmpose/blob/main/mmpose/evaluation/metrics/coco_metric.py#L453-L456
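A minimal sketch of that idea (not the actual `coco_metric.py` code): each instance carries a single score, and instances below a threshold, or beyond a per-image cap, are dropped. All names and values here are hypothetical.

```python
# Illustrative instance filtering: keep at most `max_instances` detections
# whose overall score exceeds `score_thr`. Names and defaults are hypothetical,
# not taken from mmpose's coco_metric.py.
def filter_instances(instances, score_thr=0.1, max_instances=20):
    """instances: list of dicts, each with a scalar 'score' field."""
    kept = [inst for inst in instances if inst['score'] > score_thr]
    kept.sort(key=lambda inst: inst['score'], reverse=True)
    return kept[:max_instances]

detections = [{'score': 0.9}, {'score': 0.05}, {'score': 0.4}]
print(filter_instances(detections))  # the 0.05 instance is dropped
```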