Yukai Yang (Alexis)
Unfortunately, I don't have much bandwidth right now. However, you are welcome to contribute.
The general workflow is correct. It is also recommended to reuse the same ONNX -> TensorRT conversion. You will want to create a `yolox.py` or `yolor.py` for the ONNX to TRT conversion....
Performance looks similar to yolov4-csp-swish, but it might still be worth a try. You can try using torch2trt in `yolox.py` if it doesn't add too many dependencies.
Use `app.py` as a starting point for any downstream application that uses FastMOT. Only modify `mot.py` or `tracker.py` if you want to alter the tracker logic/algorithms.
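A minimal downstream loop, sketched after the structure of `app.py` (the constructor arguments below are approximations, not the exact FastMOT signatures; check `app.py` in the repo for the real ones):

```python
import fastmot

# sketch only: argument names are illustrative
stream = fastmot.VideoIO(size, input_uri)   # frame capture
mot = fastmot.MOT(size, config)             # tracking pipeline

stream.start_capture()
while True:
    frame = stream.read()
    if frame is None:
        break
    mot.step(frame)
    for track in mot.visible_tracks():
        ...  # your downstream logic: counting, alerts, etc.
```

Keep the tracker internals untouched and build on top of the per-frame track output.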
If you can't convert to ONNX, try [torch2trt](https://github.com/NVIDIA-AI-IOT/torch2trt)
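A sketch following torch2trt's documented usage, which converts a PyTorch module to TensorRT directly and skips the ONNX step (requires a CUDA GPU; `load_yolox_model` is a placeholder for your own model loader):

```python
import torch
from torch2trt import torch2trt

model = load_yolox_model().eval().cuda()   # placeholder: load your model
x = torch.ones((1, 3, 640, 640)).cuda()    # example input shape
model_trt = torch2trt(model, [x], fp16_mode=True)
torch.save(model_trt.state_dict(), 'yolox_trt.pth')
```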
> Hi, did you add the option of using the H264 stream?

H264 is supported over RTSP streams. If you want to use UDP, you can customize the...
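For example, a GStreamer pipeline that receives H264 over UDP/RTP, decodes it, and hands frames to the application through an `appsink` might look like this (the port and caps values are placeholders for your stream):

```
udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink
```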
Do your two RTSP streams have non-overlapping views and the same frame rate?
Your case is doable but it requires a nontrivial amount of code changes. Track history needs to be shared across different MOT objects. I'll try to create a demo script...
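To illustrate the idea of sharing track history across trackers, here is a minimal pure-Python sketch (this is not part of the FastMOT API — it only shows one way a shared store could hand out globally unique track IDs and collect per-frame boxes from multiple MOT instances):

```python
from collections import defaultdict

class SharedTrackHistory:
    """Illustrative shared store for track history across trackers."""

    def __init__(self):
        self._next_id = 1
        self._history = defaultdict(list)

    def new_track(self):
        # assign a globally unique ID so two MOT instances
        # never hand out the same track ID
        tid = self._next_id
        self._next_id += 1
        return tid

    def append(self, track_id, frame_idx, tlbr):
        # record one (frame index, bounding box) observation
        self._history[track_id].append((frame_idx, tlbr))

    def history(self, track_id):
        return list(self._history[track_id])

shared = SharedTrackHistory()
cam0_track = shared.new_track()   # ID from camera 0's tracker
cam1_track = shared.new_track()   # ID from camera 1's tracker
shared.append(cam0_track, 0, (10, 20, 50, 80))
shared.append(cam0_track, 1, (12, 22, 52, 82))
```

Each per-camera MOT object would hold a reference to the same store instead of numbering tracks independently.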
I'm not familiar with the DeepStream SDK. You might need to add an appsink in the middle of your pipeline to feed the frames into FastMOT. Related issue: #132
You can create 4 `fastmot.VideoIO` objects.
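For instance (the constructor arguments are approximate; see `app.py` for the exact `fastmot.VideoIO` signature):

```python
import fastmot

uris = ['rtsp://cam1/stream', 'rtsp://cam2/stream',
        'rtsp://cam3/stream', 'rtsp://cam4/stream']
# one VideoIO per camera
streams = [fastmot.VideoIO(size, uri) for uri in uris]
frames = [s.read() for s in streams]
```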