How to modify the detection distance of the model?
Hi! At present the model's detection distance is only 54 m, and I want to extend it to 100 m. This part has been implemented in Python, but a bug appears at the deployment stage and the model accuracy drops severely. Please help! If you have any questions, you can reach me by mail: [email protected]
I met the same problem. I changed the configuration below when training the PyTorch model:
```yaml
voxel_size: [0.075, 0.1389, 0.2]
point_cloud_range: [-54.0, -100.0, -5.0, 54.0, 100.0, 3.0]
model:
  encoders:
    vtransform:
      xbound: [-54.0, 54.0, 0.3]
      ybound: [-100.0, 100.0, 0.5556]
  heads:
    object:
      bbox_coder:
        post_center_range: [-61.2, -120.0, -10.0, 61.2, 120.0, 10.0]
```
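One thing worth double-checking with these values: the x span (108 m / 0.075) divides into an integer number of voxels, but the y span (200 m / 0.1389) does not, so the training code and the deployment voxelizer may round the grid size differently. A small sketch to check this, using the values above:

```python
# Check whether each point_cloud_range span is an integer multiple of the
# corresponding voxel_size; values are transcribed from the config above.
point_cloud_range = [-54.0, -100.0, -5.0, 54.0, 100.0, 3.0]
voxel_size = [0.075, 0.1389, 0.2]

for name, lo, hi, vs in zip("xyz", point_cloud_range[:3],
                            point_cloud_range[3:], voxel_size):
    n = (hi - lo) / vs
    ok = abs(n - round(n)) < 1e-6
    print(f"{name}: ({hi} - {lo}) / {vs} = {n:.4f} voxels "
          f"({'ok' if ok else 'NOT integral'})")
```

Here 200 / 0.1389 ≈ 1439.885, so a voxel size of 200/1440 ≈ 0.138888… would divide evenly; whether this matters depends on how each implementation rounds, so treat it as a hypothesis to verify, not a confirmed cause.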
And I changed `python.cpp`, since I use the TRT engine from Python:
```cpp
voxelization.min_range = nvtype::Float3(-54.0f, -100.0f, -5.0f);
voxelization.max_range = nvtype::Float3(+54.0f, +100.0f, +3.0f);
voxelization.voxel_size = nvtype::Float3(0.075f, 0.1389f, 0.2f);
geometry.xbound = nvtype::Float3(-54.0f, 54.0f, 0.3f);
geometry.ybound = nvtype::Float3(-100.0f, 100.0f, 0.5556f);
transbbox.pc_range = {-54.0f, -100.0f};
transbbox.post_center_range_start = {-61.2f, -120.0f, -10.0f};
transbbox.post_center_range_end = {61.2f, 120.0f, 10.0f};
transbbox.voxel_size = {0.075f, 0.1389f};
```
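Since the same ranges now live in two places (the training YAML and `python.cpp`), a quick script that cross-checks the two transcriptions can rule out a silent mismatch. Both dicts below are transcribed from the snippets in this thread:

```python
# Cross-check training-config values against the values hard-coded in
# python.cpp -- any mismatch here silently degrades accuracy.
train = {
    "min_range": (-54.0, -100.0, -5.0),
    "max_range": (54.0, 100.0, 3.0),
    "voxel_size": (0.075, 0.1389, 0.2),
    "xbound": (-54.0, 54.0, 0.3),
    "ybound": (-100.0, 100.0, 0.5556),
}
deploy = {  # transcribed from python.cpp
    "min_range": (-54.0, -100.0, -5.0),
    "max_range": (54.0, 100.0, 3.0),
    "voxel_size": (0.075, 0.1389, 0.2),
    "xbound": (-54.0, 54.0, 0.3),
    "ybound": (-100.0, 100.0, 0.5556),
}

for key in train:
    status = "match" if train[key] == deploy[key] else "MISMATCH"
    print(f"{key}: {status}")
```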
When I visualize the prediction results of the PyTorch model, they look fine. However, the results from `libpybev.so` are very poor. If I keep the original repo config unchanged, the performance is stable.
> Results from `libpybev.so` are very poor.

It is possible that there is a softmax operation in the `head.onnx` model, which may lead to a precision overflow during FP16 conversion.
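The overflow mechanism is easy to reproduce: `exp()` of a logit above roughly 11 already exceeds the FP16 maximum of 65504, so a naive softmax produces Inf/Inf = NaN, while the standard max-subtraction trick stays finite. A minimal NumPy demonstration (the logit values are made up for illustration):

```python
import numpy as np

# FP16 max is ~65504, so exp(x) overflows FP16 for x > ~11.09.
logits = np.array([12.0, 5.0, 1.0], dtype=np.float16)

# Naive softmax in FP16: exp(12) overflows to inf, giving inf/inf = nan.
naive = np.exp(logits) / np.exp(logits).sum()

# Numerically stable softmax: subtract the max logit before exponentiating.
shifted = logits - logits.max()
stable = np.exp(shifted) / np.exp(shifted).sum()

print("naive :", naive)   # contains nan
print("stable:", stable)  # valid probabilities
```

If the exported head indeed contains a softmax, keeping that layer (or the whole head) in FP32 in the TensorRT engine is a common mitigation.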
I'm not sure. I modified the config:

```yaml
voxel_size: [0.075, 0.075, 0.2]
point_cloud_range: [-90.0, -90.0, -5.0, 90.0, 90.0, 3.0]
model:
  encoders:
    vtransform:
      xbound: [-90.0, 90.0, 0.3]
      ybound: [-90.0, 90.0, 0.3]
```

I got a head BEV shape of [1, 512, 600, 600]. When I run the headonnx.py file, I get a lot of NaN values during libpybev verification.
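Before decoding boxes, it can help to scan the engine outputs for NaN/Inf to confirm at which stage they first appear. A sketch with a hypothetical helper (`report_invalid` is my name, not from the repo); the tensor is a scaled-down random stand-in for the [1, 512, 600, 600] head input:

```python
import numpy as np

def report_invalid(name, arr):
    """Count NaN/Inf entries in a tensor and report them."""
    nan = int(np.isnan(arr).sum())
    inf = int(np.isinf(arr).sum())
    print(f"{name}: {nan} NaN, {inf} Inf out of {arr.size} values")
    return nan == 0 and inf == 0

# Stand-in for a dumped BEV feature map (scaled down from [1, 512, 600, 600]).
bev_feature = np.random.randn(1, 512, 60, 60).astype(np.float16)
report_invalid("bev_feature", bev_feature)
```

Running this on each intermediate tensor (camera features, BEV features, head outputs) narrows down whether the NaNs originate in the view transform or in the head.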
Could you share headonnx.py so I can verify this?
I've built `libpybev.so` again and got somewhat reasonable results. The problem is that the performance drop between the PyTorch model (.pth) and the shared object built from the TensorRT engine (.so) is significant: for the Car class on my custom dataset, the mAP of the PyTorch model is 52.8, while that of the .so is 20.3.
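To localize a drop this large, one common approach is to dump intermediate tensors from both the .pth model and the engine and compare them stage by stage; the first stage whose similarity collapses is usually the culprit. A sketch with synthetic tensors standing in for dumped activations (the helper and shapes are illustrative, not from the repo):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened activation tensors."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic stand-ins: in practice, dump the same stage's output from the
# PyTorch model and from the TensorRT engine on an identical input.
torch_out = np.random.randn(1, 512, 180, 180).astype(np.float32)
trt_out = torch_out + 1e-3 * np.random.randn(*torch_out.shape).astype(np.float32)

sim = cosine_similarity(torch_out, trt_out)
max_abs = float(np.abs(torch_out - trt_out).max())
print(f"cosine similarity: {sim:.6f}, max abs diff: {max_abs:.6f}")
```

A cosine similarity near 1.0 at one stage followed by a sharp drop at the next points at the converted layer (or precision mode) responsible for the mAP gap.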
Additional questions:
- What exactly do you mean by getting a lot of NaN values during libpybev verification?
- How can the precision overflow in the `head.onnx` model be prevented?