Tyler-D
Emmm, the default behavior is like this. The `train_config` is not needed in the export phase, so we just remove it. You can see there is an `export_rgb.yaml` for export in...
Hi @jdaviddx, here is the per-epoch log from when I trained YOLOv4 on COCO with the full 80 classes (evaluation is in SAMPLE mode, so the mAP is slightly worse...
No. For SOTA training, we use the ImageNet-pretrained CSPDarkNet53. The pretrained models on NGC are trained on Open Images.
Did you run with the TensorRT sample? Which model did you use? And did you try INT8 on the dGPU?
OK. Then it is potentially an L4T TensorRT issue. My suggestion is to pick some layers and set their precision to FP16.
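For reference, a minimal sketch of how such a per-layer precision override could look with the TensorRT Python API (assuming TensorRT 8.2+, an ONNX model at `model.onnx`, and placeholder layer names; the file paths and `SUSPECT_LAYERS` entries below are just examples, not values from this thread):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Hypothetical names of the layers suspected to misbehave on L4T.
SUSPECT_LAYERS = ["conv_42", "conv_43"]

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
# Ask the builder to honor the per-layer precision set below.
config.set_flag(trt.BuilderFlag.PREFER_PRECISION_CONSTRAINTS)

for i in range(network.num_layers):
    layer = network.get_layer(i)
    if layer.name in SUSPECT_LAYERS:
        layer.precision = trt.float16
        layer.set_output_type(0, trt.float16)

engine_bytes = builder.build_serialized_network(network, config)
with open("model_mixed_fp16.engine", "wb") as f:
    f.write(engine_bytes)
```

`PREFER_PRECISION_CONSTRAINTS` asks the builder to keep the FP16 assignment whenever possible; `OBLIGATE_PRECISION_CONSTRAINTS` would make it a hard requirement instead.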
This is mostly due to wrong text detection in OCDNet. I suggest you dump the bboxes predicted by OCDNet at this point: https://github.com/NVIDIA-AI-IOT/NVIDIA-Optical-Character-Detection-and-Recognition-Solution/blob/main/src/OCDNetEngine.cpp#L213 Check and visualize them on the...
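If it helps, here is a small sketch for eyeballing the dumped boxes, assuming you wrote them to a text file as one quadrilateral per line, "x1 y1 x2 y2 x3 y3 x4 y4" in input-image coordinates (the file name and format here are hypothetical, not something the OCDNet repo produces by itself):

```python
import cv2
import numpy as np

# Assumed inputs: the original image and the dumped box file.
img = cv2.imread("input.jpg")
with open("ocd_boxes.txt") as f:
    for line in f:
        vals = list(map(float, line.split()))
        if len(vals) != 8:
            continue  # skip malformed lines
        pts = np.array(vals, dtype=np.int32).reshape(4, 2)
        cv2.polylines(img, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
cv2.imwrite("ocd_boxes_vis.jpg", img)
```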
@morganh-nv @Bin-NV to check the Triton Server issue
@lakshanthad You don't need to run the export script on the Jetson Nano. You can run export on a host machine (even without a GPU) and run inference on the Nano.
If you want to run GPU NMS in DeepStream, you can follow these links: https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization#deepstream-deployment https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps