BEVFormer_tensorrt
How to use a YOLOX weight other than the one provided for quantization?
What modifications are needed to make other YOLOX or YOLOv8 weights work in the 2D quantization task?
I tried yolox_x_fast_8xb8-300e_coco_20230215_133950-1d509fab.pth from the mmyolo GitHub and yolox-s.pth from the official YOLOX GitHub. Both of them produce "The testing results of the whole dataset is empty.", for instance after running "trt_evaluate_fp16.sh". I googled the error, and the suggested solution seems to be to "reduce the learning rate", but how exactly would I do that in our case, i.e. for the quantization task?
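For reference, my current guess at "reduce the learning rate" is to edit the QAT config before fine-tuning, along the lines of the sketch below. The config path, the `load_from` field, and the `optimizer.lr` field are assumptions based on the usual mmdetection config layout, not this repo's actual files, so please correct me if the quantization configs are structured differently.

```python
# Sketch only: assumes an mmdetection/mmcv-style config; the paths and
# field names below are my guesses, not files taken from this repo.
from mmcv import Config

cfg = Config.fromfile("configs/yolox/yolox_s_trt.py")  # hypothetical QAT config

# Point fine-tuning at the externally downloaded checkpoint.
cfg.load_from = "checkpoints/yolox_s.pth"

# "Reduce the learning rate": scale the base lr down for QAT fine-tuning.
cfg.optimizer.lr = cfg.optimizer.lr * 0.1

# Write out a modified config to pass to the training/quantization script.
cfg.dump("configs/yolox/yolox_s_trt_lowlr.py")
```

Is that the right direction, or does the 2D quantization pipeline expect the learning rate (and the new checkpoint) to be set somewhere else?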