Frustum-Pointpillars
When I use the weights I trained to run evaluation, I get the following error. How can I solve it?
ubuntu@VM-0-11-ubuntu:~/Frustum-Pointpillars/second$ python ./pytorch/train.py evaluate --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=./model
middle_class_name PointPillarsScatter
Restoring parameters from model/voxelnet-29503.tckpt
remain number of infos: 7518
Generate output labels...
Traceback (most recent call last):
File "./pytorch/train.py", line 771, in
I get the same error when trying to evaluate after training. The contents of input_dict look like this:
{'P2': array([[7.215377e+02, 0.000000e+00, 6.095593e+02, 4.485728e+01],
[0.000000e+00, 7.215377e+02, 1.728540e+02, 2.163791e-01],
[0.000000e+00, 0.000000e+00, 1.000000e+00, 2.745884e-03],
[0.000000e+00, 0.000000e+00, 0.000000e+00, 1.000000e+00]],
dtype=float32),
'Trv2c': array([[ 7.533745e-03, -9.999714e-01, -6.166020e-04, -4.069766e-03],
[ 1.480249e-02, 7.280733e-04, -9.998902e-01, -7.631618e-02],
[ 9.998621e-01, 7.523790e-03, 1.480755e-02, -2.717806e-01],
[ 0.000000e+00, 0.000000e+00, 0.000000e+00, 1.000000e+00]],
dtype=float32),
'image_idx': 0,
'image_path': 'testing/image_2/000000.png',
'image_shape': array([ 375, 1242], dtype=int32),
'points': array([[ 5.5025e+01, 2.5000e-02, 2.0700e+00, 0.0000e+00],
[ 5.4825e+01, 1.9800e-01, 2.0630e+00, 0.0000e+00],
[ 5.4738e+01, 3.6900e-01, 2.0600e+00, 0.0000e+00],
...,
[ 6.4090e+00, -4.2000e-02, -1.6750e+00, 3.1000e-01],
[ 6.4000e+00, -2.2000e-02, -1.6720e+00, 3.4000e-01],
[ 6.4000e+00, -2.0000e-03, -1.6720e+00, 3.1000e-01]],
dtype=float32),
'rect': array([[ 0.9999239 , 0.00983776, -0.00744505, 0. ],
[-0.0098698 , 0.9999421 , -0.00427846, 0. ],
[ 0.00740253, 0.00435161, 0.9999631 , 0. ],
[ 0. , 0. , 0. , 1. ]],
dtype=float32)}
Sorry for the delayed response. Please try out the following command for evaluation:
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=trained_model_dir/ --ckpt_path=trained_model_dir/ckpt/voxelnet-324800.tckpt --pickle_result=True --ref_detfile=./rgb_detections/rgb_detection_val.txt
For evaluation on the validation or test dataset, you must provide a 2D reference detection file or a directory that contains 2D detections in the KITTI format (see the evaluate() function in the train.py file).
For the validation dataset, a reference detection file is already provided. This is the same as the one provided by the F-PointNet codebase.
For the test dataset, I have generated 2D detections using Faster R-CNN trained on the EuroCityPersons dataset. I will add those to this repo so that you can evaluate the test set also.
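In case it helps to sanity-check that file, here is a minimal sketch of how such a reference detection file could be loaded. The line layout (image path, class id, score, xmin ymin xmax ymax) is an assumption based on the F-PointNet style rgb_detection files, so please verify it against your copy before relying on it:

import os
from collections import defaultdict

def load_rgb_detections(path):
    # Assumed line layout (F-PointNet style rgb_detection files):
    #   <image_path> <class_id> <score> <xmin> <ymin> <xmax> <ymax>
    dets_per_image = defaultdict(list)
    with open(path, "r") as f:
        for line in f:
            parts = line.strip().split()
            if len(parts) < 7:
                continue  # skip empty or malformed lines
            cls_id, score = int(float(parts[1])), float(parts[2])
            box = [float(v) for v in parts[3:7]]  # xmin, ymin, xmax, ymax
            # image index taken from the file name, e.g. .../000123.png -> 123
            image_idx = int(os.path.splitext(os.path.basename(parts[0]))[0])
            dets_per_image[image_idx].append((cls_id, score, box))
    return dets_per_image

dets = load_rgb_detections("./rgb_detections/rgb_detection_val.txt")
print(len(dets), "images with reference 2D boxes")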
Hey, thanks for your reply. Unfortunately, I still get the same error when using your suggested command.
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=trained_model_dir/ --ckpt_path=trained_model_dir/ckpt/voxelnet-324800.tckpt --pickle_result=True --ref_detfile=./rgb_detections/rgb_detection_val.txt
I managed to run the evaluation by downloading the reference object detections from the KITTI 3D object detection benchmark website (https://s3.eu-central-1.amazonaws.com/avg-kitti/data_object_det_2.zip) and extracting them into the KITTI testing folder:
├── testing
│ ├── calib
│ ├── det_2
│ │ ├── lsvm4_car_2
│ │ ├── lsvm4_cyclist_2
│ │ └── lsvm4_pedestrian_2
│ ├── image_2
│ ├── velodyne
│ └── velodyne_reduced
Then I used the evaluate script like so:
python pytorch/train.py evaluate --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=./../models/car --det_dir=/KITTI_ROOT/testing/det_2/lsvm4_car_2 --pickle_result=False --predict_test=True
Some results for the test set were then saved in KITTI format in the /path_to_frustum/Frustum-Pointpillars/models/car/predict_test/step_XXXXX folder, but I am not sure how to evaluate these results further. Do you have a script for that, or can you point me in the right direction on how to proceed? Thanks in advance.
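For reference, the files in predict_test/step_XXXXX look like plain KITTI result files (one txt per image), so they can be read back with a small sketch like the following. This is not a script from this repo; the field order is assumed from the standard KITTI result format (type, truncation, occlusion, alpha, 2D bbox, 3D dimensions h/w/l, location x/y/z, rotation_y, score), and the example path is only illustrative:

def read_kitti_result_file(path):
    # Assumed field order (standard KITTI result format):
    #   type trunc occ alpha x1 y1 x2 y2 h w l x y z rotation_y score
    detections = []
    with open(path, "r") as f:
        for line in f:
            parts = line.strip().split()
            if len(parts) < 15:
                continue  # skip empty or malformed lines
            detections.append({
                "type": parts[0],
                "bbox": [float(v) for v in parts[4:8]],         # 2D box in the image
                "dimensions": [float(v) for v in parts[8:11]],  # height, width, length
                "location": [float(v) for v in parts[11:14]],   # x, y, z in camera coords
                "rotation_y": float(parts[14]),
                "score": float(parts[15]) if len(parts) > 15 else None,
            })
    return detections

dets = read_kitti_result_file("predict_test/step_XXXXX/000000.txt")  # illustrative path
print(len(dets), "detections")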
Thanks for your reply, but I still get an error when following your suggestion:
I used the evaluate script like so:
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=./model --ckpt_path=./model/voxelnet-29503.tckpt --ref_detfile=./rgb_detections/rgb_detection_val.txt --pickle_result=True
..........
ref_bboxes is none for image_idx: 7503
ref_bboxes is none for image_idx: 7504
ref_bboxes is none for image_idx: 7505
ref_bboxes is none for image_idx: 7506
ref_bboxes is none for image_idx: 7507
ref_bboxes is none for image_idx: 7508
ref_bboxes is none for image_idx: 7509
ref_bboxes is none for image_idx: 7510
ref_bboxes is none for image_idx: 7511
ref_bboxes is none for image_idx: 7512
ref_bboxes is none for image_idx: 7513
ref_bboxes is none for image_idx: 7514
ref_bboxes is none for image_idx: 7515
ref_bboxes is none for image_idx: 7516
ref_bboxes is none for image_idx: 7517
[100.0%][===================>][0.78it/s][10:06>00:01]
generate label finished(12.40/s). start eval:
avg forward time per example: 0.007
avg postprocess time per example: 0.019
Traceback (most recent call last):
  File "pytorch/train.py", line 771, in <module>
    fire.Fire()
  File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire
    target=component.__name__)
  File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "pytorch/train.py", line 758, in evaluate
    gt_annos = [info["annos"] for info in eval_dataset.dataset.kitti_infos]
  File "pytorch/train.py", line 758, in <listcomp>
    gt_annos = [info["annos"] for info in eval_dataset.dataset.kitti_infos]
KeyError: 'annos'
Can you see what went wrong? Thank you so much!
Try --pickle_result=False; this might solve it, but I am not sure.
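Looking at your log again, the two symptoms together (ref_bboxes is none for image indices up to 7517, and the KeyError: 'annos' while collecting ground truth) suggest the evaluator is loading the 7518 test-split infos, which contain no annotations, while rgb_detection_val.txt presumably only covers the train/val indices. A quick, hypothetical check on the info pickle (the file name below is an assumption; use whatever info path your eval_input_reader config actually points to, e.g. kitti_info_path in the SECOND-style configs):

import pickle

# Hypothetical sanity check: does the info pickle used for evaluation
# contain ground-truth annotations at all?
with open("kitti_infos_val.pkl", "rb") as f:  # assumed file name, adjust to your setup
    infos = pickle.load(f)

print("number of infos:", len(infos))
print("infos without 'annos':", sum(1 for info in infos if "annos" not in info))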
Thanks for your help, but it shows the same error as above.
Ah, now I see it. Check my command: you need to pass --det_dir instead of --ref_detfile. The full command, as given above:
python pytorch/train.py evaluate --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=./../models/car --det_dir=/KITTI_ROOT/testing/det_2/lsvm4_car_2 --pickle_result=False --predict_test=True
@xuyunf Hello! Did you solve this problem? If yes, could you please share the solution?