yingfei liu

Results 58 comments of yingfei liu

Hi, we train PETR with 8 GPUs, so the total batch size is 8. (1) If you want to train PETR with a single GPU, you can set `samples_per_gpu=8`, and...
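A minimal sketch of the idea, assuming the usual mmdetection3d-style config where the effective batch size is `samples_per_gpu * num_gpus` (the field names below follow that convention and are an assumption, not a quote from the PETR config):

```python
# Hypothetical config fragment: PETR was trained on 8 GPUs with 1 sample
# each (total batch size 8). On a single GPU, raising samples_per_gpu to 8
# keeps the same effective batch size.
data = dict(
    samples_per_gpu=8,  # 8 GPUs x 1 sample  ->  1 GPU x 8 samples
    workers_per_gpu=4,  # dataloader workers; tune for your machine
)

def effective_batch_size(samples_per_gpu, num_gpus):
    """Total batch size seen by the optimizer per iteration."""
    return samples_per_gpu * num_gpus
```

With this, a single-GPU run matches the original 8-GPU effective batch size, so the learning-rate schedule should not need rescaling.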

> Hi,
>
> I also have a question about how I can reproduce your results. I trained PETR_vovnet_gridmask_p4_800x320. I used 4 A100s to train. And I saw my results...

Hi, we use Swin as the backbone to compare with BEVDet, and the resolution is 768x2112, the same as BEVDet. Since PETR is a transformer detector, when using a transformer-based backbone, the...

Hello, we are conducting experiments on larger backbones. Because of a shortage of computing resources, it has not been trained on Res101 yet. We will make up for these experiments later.

Hi, you can use https://github.com/megvii-research/PETR/blob/main/tools/generate_sweep_pkl.py to generate the temporal pkl. You may need to modify https://github.com/megvii-research/PETR/blob/main/tools/generate_sweep_pkl.py#L23 and https://github.com/megvii-research/PETR/blob/main/tools/generate_sweep_pkl.py#L27. For hardware resources, you can refer to https://github.com/megvii-research/PETR/issues/41#issuecomment-1212779951 to train on a single...

Hi, in the early stage of our development, we conducted an experiment with D=1 (depth = 1.0 m). In this case, the 3D PE encodes the direction vector of a line. The result...
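A hedged sketch of why a single depth bin reduces the 3D PE to a ray direction: unprojecting a pixel with a fixed depth d = 1.0 m through the inverse camera intrinsics yields a point on the camera ray through that pixel, i.e. (up to scale) the ray's direction vector. The intrinsics matrix below is an invented example, not a value from PETR:

```python
import numpy as np

# Example pinhole intrinsics (assumed values for illustration only).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 160.0],
              [  0.0,   0.0,   1.0]])

u, v, d = 400.0, 200.0, 1.0          # pixel (u, v) at fixed depth d = 1.0 m

# Unproject: K^{-1} @ [u, v, 1] * d gives the 3D point on the camera ray.
ray = np.linalg.inv(K) @ np.array([u, v, 1.0]) * d
direction = ray / np.linalg.norm(ray)  # the unit direction a D=1 PE encodes
```

With only one depth bin there is no depth variation to encode, so all the positional information left is this per-pixel direction.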

Hi, https://github.com/megvii-research/PETR/blob/main/projects/configs/petrv2/petrv2_vovnet_gridmask_p4_1600x640_trainval_cbgs.py is the config that was used for the nuScenes test set. We reorganized the code, and now we have no resources to retrain it. Therefore, we only put the config here....

Hi, can you provide the config and run command?

Hi, you can send the config and log to my email: [email protected] .

For the coordinate transformation, (x, y, d) is used in image space, and x indexes the width axis, so we should put W in front.
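A minimal sketch of the ordering point, assuming the usual convention that x runs along the width axis and y along the height axis of an H x W feature map (names below are illustrative, not from the PETR source):

```python
import numpy as np

H, W = 2, 3                            # tiny feature map for illustration
xs = np.arange(W)                      # x coordinate ranges over the width
ys = np.arange(H)                      # y coordinate ranges over the height

# meshgrid's default 'xy' indexing takes the W-sized axis first and returns
# arrays of shape (H, W), so each pixel gets its (x, y) image-space position.
grid_x, grid_y = np.meshgrid(xs, ys)
coords = np.stack([grid_x, grid_y], axis=-1)   # (H, W, 2), last dim = (x, y)
```

Putting W first here keeps x bound to the width axis; swapping the order would silently transpose the coordinate grid.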