8 comments by morganh

If you are still encountering any errors, please create a topic with more details at https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/transfer-learning-toolkit/ If there is no issue now, please ignore my comment.

1) Yes.
2) It is possible. Refer to https://forums.developer.nvidia.com/t/multi-stream-rtsp-on-jetson-nano/122357
3) Please create a topic in the DeepStream forum: https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/deepstream-sdk/15
4) For INT8, there should be a cal.bin, the .etlt model, and your API key....
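For item 4, the three INT8 artifacts typically come together on the `tao-converter` command line. A minimal sketch of assembling such an invocation, assuming the common `-k`/`-c`/`-t`/`-e` flags (verify against `tao-converter -h` for your version; all file names and the key are placeholders):

```python
# Sketch: build a tao-converter command line for INT8 deployment.
# Flag names follow common tao-converter usage but should be checked
# against your installed version; paths and the key are placeholders.

def build_int8_convert_cmd(etlt_model: str, cal_file: str, api_key: str,
                           engine_out: str) -> list:
    """Return the argv list for an INT8 tao-converter invocation."""
    return [
        "tao-converter",
        "-k", api_key,      # API key used when the model was exported
        "-c", cal_file,     # INT8 calibration cache (cal.bin)
        "-t", "int8",       # target precision
        "-e", engine_out,   # output TensorRT engine path
        etlt_model,         # the exported .etlt model
    ]

cmd = build_int8_convert_cmd("model.etlt", "cal.bin", "MY_KEY", "model.engine")
print(" ".join(cmd))
```

The command is only constructed here, not executed, since `tao-converter` must be installed on the target machine.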

Please refer to https://forums.developer.nvidia.com/t/errors-tlt-export-tlt-yolo-model-to-int8-calibration/122787

We verify on dGPU machines only. You can refer to the [dockerfile](https://github.com/NVIDIA-AI-IOT/NVIDIA-Optical-Character-Detection-and-Recognition-Solution/blob/main/triton/Triton_Server.Dockerfile).

How did you generate the model.plan engine file? Can you elaborate on the steps?
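For context, one common way to produce a `model.plan` engine is TensorRT's `trtexec` tool. A hedged sketch that only builds the command line (the actual subprocess call is left commented out, since it requires `trtexec` on PATH and a real ONNX model; file names are placeholders):

```python
# Sketch: construct a trtexec command that converts an ONNX model into
# a serialized TensorRT engine (model.plan). Building the argv list is
# separated from running it so this stays runnable anywhere.
import subprocess  # used only if you uncomment the run() call below

def trtexec_cmd(onnx_path: str, plan_path: str, fp16: bool = True) -> list:
    cmd = ["trtexec", "--onnx=" + onnx_path, "--saveEngine=" + plan_path]
    if fp16:
        cmd.append("--fp16")  # optional reduced-precision build
    return cmd

cmd = trtexec_cmd("model.onnx", "model.plan")
# subprocess.run(cmd, check=True)  # uncomment on a machine with TensorRT
print(" ".join(cmd))
```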

Refer to https://forums.developer.nvidia.com/t/tao-retinanet-triton-server-deployment/215092 to see if it helps.

You can first run the default steps in this GitHub repo. Then, to run your own .etlt model on the Triton server, you just need to replace your own .etlt model with the...
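Swapping in your own model comes down to placing the converted engine into the model repository layout Triton expects (`<repo>/<model_name>/<version>/model.plan` plus a `config.pbtxt`). A minimal sketch of that layout, with an illustrative model name:

```python
# Sketch: create the directory layout Triton reads at startup.
# "my_detector" and the config contents are illustrative placeholders;
# your converted engine file replaces the empty model.plan written here.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp()) / "model_repository"
model_dir = root / "my_detector" / "1"            # <model name>/<version>
model_dir.mkdir(parents=True)
(model_dir / "model.plan").write_bytes(b"")       # your engine goes here
(root / "my_detector" / "config.pbtxt").write_text(
    'name: "my_detector"\nplatform: "tensorrt_plan"\n'
)
print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*")))
```

Point Triton at `root` with `--model-repository` and it will pick up the model on start.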

The post-processing is available at https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/tao_triton/python/postprocessing/yolov3_postprocessor.py
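The linked file handles the full decode; as an illustration of the core step such YOLO-style post-processors perform, here is a pure-Python sketch of greedy non-max suppression. This is not the repo's actual code (the real implementation is vectorized and also decodes class scores):

```python
# Sketch: greedy NMS over [x1, y1, x2, y2] boxes with confidence scores.

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Keep highest-scoring boxes, dropping overlaps above iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: the two overlapping boxes collapse to one
```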