Samuel Mohebban
@marcoslucianops

```
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: Deserialize engine failed because file path: best_ap-1-fp16.engine...
```
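(For the first warning, a minimal sketch of enabling lazy loading before launching the pipeline; the environment variable comes from the CUDA docs linked in the log, and the config file name is a placeholder:)

```
# Enable CUDA lazy module loading for this shell session,
# then launch the DeepStream app as usual (config name is a placeholder).
export CUDA_MODULE_LOADING=LAZY
deepstream-app -c deepstream_app_config.txt
```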
@marcoslucianops After running another test, I believe there may be an issue with the new DeepStream 6.3 implementation. Below is a config that I used for _**both**_ 6.2 and 6.3...
x86 - 3090 with Driver Version 525.125.06 + CUDA 12.1. I cannot send the exact model, but I can prepare one that should be identical. It will likely be next...
> The `export.py` from the YOLOv5 repo doesn't work with DeepStream-Yolo; you should use the `export_yoloV5.py` from `DeepStream-Yolo/utils`. Are you training your model with image normalization in the pre-processing?

Apologies...
@marcoslucianops confirmed with our team that we have not messed with normalization, so it's the same as default ultralytics. Were you able to reproduce the issue?
1) Pull the master branch of this repository
2) Pull the `yolov5l.pt` weights
3) Pull the master branch of ultralytics, and run `pip install onnx onnxruntime`
4) Convert the `yolov5l.pt` weights to `yolov5.onnx`, with...
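(For reference, a rough shell sketch of those steps; repo layout, weight URL, and exporter flags are assumptions, so check the DeepStream-Yolo docs for the exact `export_yoloV5.py` arguments:)

```
# 1) and 3): clone the two repos and install the ONNX dependencies
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
git clone https://github.com/ultralytics/yolov5.git
pip install onnx onnxruntime

# 2): pull the yolov5l.pt weights (release tag/URL may differ for your version)
wget https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt -P yolov5/

# 4): convert with the DeepStream-Yolo exporter instead of the stock export.py;
# flags other than the weights path are assumptions, see export_yoloV5.py --help
cp DeepStream-Yolo/utils/export_yoloV5.py yolov5/
cd yolov5 && python3 export_yoloV5.py -w yolov5l.pt --dynamic
```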
@marcoslucianops thanks for checking. Did you use the same pgie.txt config I dropped above? If not, can you drop the config you used? Again, really appreciate your help with this.
> I didn't make branches because it's better for me to maintain only one branch updated with the news.
>
> About the ONNX, in my tests, the mAP was...
New Release

```
[property]
gpu-id = 0
model-color-format = 0
labelfile-path = labels.txt
uff-input-blob-name = input_image
process-mode = 1
num-detected-classes = 2
interval = 0
batch-size = 1
gie-unique-id =...
```
Okay, I will try that. Having the model name not present in the cfg has never been an issue. If it was, wouldn't it throw an error?
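(For context, a sketch of how the model reference would typically appear in the `[property]` section; the file names below are placeholders, and while `onnx-file` / `model-engine-file` are standard nvinfer keys, the sample configs in DeepStream-Yolo should be treated as the reference:)

```
[property]
# ...existing keys from the config above...
# Placeholder paths; the engine file is generated on the first run
onnx-file = yolov5l.onnx
model-engine-file = yolov5l_b1_gpu0_fp16.engine
```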