lewes6369

48 comments by lewes6369

Since your model only outputs one class, just modify [CLASS_NUM](https://github.com/lewes6369/tensorRTWrapper/blob/0aaab5110d0794c7c374c7f46fbde2050b459556/code/include/YoloConfigs.h#L9) in `YoloConfigs.h` and run the command with `--class=1`

Are they fed the same calibration data? And are they using the same TensorRT version? My INT8 engine on the 1060 is also 60 MB. Maybe your TITAN X engine is created as mode...

I have a 416-input model, not the 418 size, and it is already uploaded to Google Drive.

The TensorRT parser cannot handle the negative slope directly (the leaky-ReLU version), so I added it as a plugin. As written in the TensorRT header, the PReLU...

Yes, in the YOLOv3 model the ReLU layers are actually leaky-ReLU layers. And the upsample layer is not supported by default TensorRT, so I added it as...

I am not sure what the parsing issue is. Can you tell me which model and prototxt you used? It seems that your input caffemodel does not...

Hi @cong235, I am happy to help with your work. You can train the darknet model using the official YOLOv3 repo: https://github.com/pjreddie/darknet. Then convert it to a caffemodel with this [git](https://github.com/marvis/pytorch-caffe-darknet-convert...

Hi @cong235, did you modify CLASS_NUM in the file `tensorRTWrapper/code/include/YoloConfigs.h` to one class? Not only the command line but also this header needs the class number changed. I will merge...

Yes. If you did not code it in CUDA, you have to run the custom layer on the CPU, and it will cost time in the communication between CPU memory and GPU...

After converting to a TensorRT engine, all supported layers will run on the GPU. The custom layer can run on either CPU or GPU depending on your implementation. If you want to run on...