raypine
We use mpi15 for all activities.
lib/preprocess/create_annot.py
We follow the [evaluation code](https://github.com/mks0601/3DMPPE_POSENET_RELEASE/blob/master/data/MuPoTS/mpii_mupots_multiperson_eval.m) of Moon et al. and the official code of the MuPoTS-3D dataset.
Sorry for that, I forgot we have already provided it: it is in the lib/eval folder.
We adopt uniform sampling; the number of samples is given in the paper.
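Uniform sampling over a sequence can be sketched as follows. This is a minimal illustration; the function name and the pure-Python implementation are our own, not code from the repository:

```python
def uniform_sample(num_frames, num_samples):
    """Pick `num_samples` evenly spaced frame indices from a sequence.

    Illustrative helper, not the repository's implementation.
    """
    step = (num_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]

print(uniform_sample(100, 5))  # [0, 25, 50, 74, 99]
```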
The batch size is calculated under the multi-GPU DistributedDataParallel training setting.
What is your setting of `nproc_per_node`?
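For reference, under DistributedDataParallel the effective batch size is the per-process batch size multiplied by the number of launched processes. The numbers below are purely illustrative, not the values used in the paper:

```python
# Illustrative numbers only, not the paper's settings.
per_gpu_batch_size = 8        # batch size seen by each DDP process
nproc_per_node = 4            # value passed to torch.distributed.launch
effective_batch_size = per_gpu_batch_size * nproc_per_node
print(effective_batch_size)   # 32
```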
It may be a problem. An easy workaround is to call `train.py` directly rather than launching it through `torch.distributed.launch`.
Sorry for that. It is not preserved. However, in our experience, reaching the performance reported in our paper on Panoptic (or even better) is easy, since the distribution of training...
The number of training epochs we set is relatively large; you can stop early according to the loss curve and the performance on the validation set you choose.
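Stopping early based on a monitored validation metric can be sketched with a small helper like this. It is a generic pattern, not code from the repository; the class and parameter names are our own:

```python
class EarlyStopper:
    """Signal a stop when a monitored loss has not improved for `patience` checks.

    Generic sketch of early stopping on a validation metric; lower is better.
    """

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, metric):
        # Returns True when training should stop.
        if metric < self.best - self.min_delta:
            self.best = metric        # improvement: reset the counter
            self.bad_checks = 0
        else:
            self.bad_checks += 1      # no improvement this check
        return self.bad_checks >= self.patience

# Example: stop after two consecutive checks without improvement.
stopper = EarlyStopper(patience=2)
print(stopper.step(1.0))  # False (new best)
print(stopper.step(1.1))  # False (one bad check)
print(stopper.step(1.2))  # True  (two bad checks in a row)
```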