Daniel Frías Balbuena
Hi, I'm trying to export the RTMO model to TensorRT and I get an error. My code is the following: ``` from mmdeploy.apis import torch2onnx from mmdeploy.apis.tensorrt import onnx2tensorrt...
With RTMPose-M it works; this is what I changed from my last code: ``` img = 'mmdeploy/demo/resources/human-pose.jpg' work_dir = 'work_dir/trt/rtmpose-M' save_file = 'end2end.onnx' deploy_cfg = 'mmdeploy/configs/mmpose/pose-detection_tensorrt_dynamic-256x192.py' model_cfg = 'mmpose/configs/body_2d_keypoint/rtmpose/coco/rtmpose-m_8xb256-420e_coco-256x192.py'...
With RTMDet Nano I get the same error as with RTMO. This is what I changed: ``` img = 'mmdeploy/demo/resources/human-pose.jpg' work_dir = 'work_dir/trt/rtmdet-nano' save_file = 'end2end.onnx' deploy_cfg = 'mmdeploy/configs/mmdet/detection/detection_tensorrt_static-320x320.py' model_cfg =...
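For context, here is a minimal sketch of the two-step mmdeploy Python export flow that the truncated snippets above follow (`torch2onnx` to produce the ONNX file, then `onnx2tensorrt` to build the engine), based on the mmdeploy documentation. The deploy config, model config, and checkpoint paths below are placeholders, not the exact values from these runs:

```python
import os

from mmdeploy.apis import torch2onnx
from mmdeploy.apis.tensorrt import onnx2tensorrt

# Placeholder inputs: substitute the actual deploy/model configs and checkpoint.
img = 'mmdeploy/demo/resources/human-pose.jpg'
work_dir = 'work_dir/trt/rtmo'
save_file = 'end2end.onnx'
deploy_cfg = 'path/to/deploy_cfg_tensorrt.py'   # hypothetical path
model_cfg = 'path/to/model_cfg.py'              # hypothetical path
model_checkpoint = 'path/to/checkpoint.pth'     # hypothetical path

# Step 1: export the PyTorch model to ONNX (this step can run on CPU).
torch2onnx(img, work_dir, save_file, deploy_cfg, model_cfg,
           model_checkpoint, device='cpu')

# Step 2: build the TensorRT engine from the ONNX file.
onnx_model = os.path.join(work_dir, save_file)
onnx2tensorrt(work_dir, 'end2end.engine', 0, deploy_cfg, onnx_model,
              device='cuda:0')
```

Note that TensorRT engines can only be built on an NVIDIA GPU, so the second step will fail if only a CPU is available.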
Hi, I have the same issue. I tried it with a video of duration 6:53, so in the json I put ("fps": 25, "length_secs": 413) and I...
Hi @paulxinzhou. The number of seconds is 413.88, so I wrote 413 in the json. I printed out the length of the `frames` variable from the file "/content/Soccernet features/paddlevideo/loader/pipelines/sample_one_file.py" and I...
Hi @paulxinzhou. After several tests I came to the following conclusions: for videos in mp4 format, the maximum duration allowed is the duration in seconds * the number of frames...
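As an illustration (not the repository's code), one can sanity-check the "fps" and "length_secs" values written in the json against the actual video with OpenCV; the video path and the two json values below are assumptions based on the numbers mentioned above:

```python
import cv2

video_path = 'match.mp4'      # hypothetical path to the input video
fps_in_json = 25              # value written in the json ("fps": 25)
length_secs_in_json = 413     # value written in the json ("length_secs": 413)

# Read the real properties of the video file.
cap = cv2.VideoCapture(video_path)
real_fps = cap.get(cv2.CAP_PROP_FPS)
real_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()

# Compare the frame count implied by the json with what the video actually contains.
expected_frames = fps_in_json * length_secs_in_json
print(f'json implies ~{expected_frames} frames; '
      f'video has {real_frames} frames at {real_fps:.2f} fps '
      f'({real_frames / real_fps:.2f} s)')
```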
It would be great if a line came out of each pie slice and its label were shown at the end of it.
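The comment does not name the plotting library, so purely as an assumption, here is one way to draw a leader line from each pie slice to its label using matplotlib's `ax.annotate`:

```python
import matplotlib.pyplot as plt
import numpy as np

data = [35, 25, 20, 20]
labels = ['A', 'B', 'C', 'D']

fig, ax = plt.subplots()
wedges, _ = ax.pie(data, startangle=90)

for wedge, label in zip(wedges, labels):
    # Mid-angle of the wedge, converted to a point on the unit circle.
    ang = (wedge.theta1 + wedge.theta2) / 2.0
    x = np.cos(np.deg2rad(ang))
    y = np.sin(np.deg2rad(ang))
    ha = 'right' if x < 0 else 'left'
    # Place the label outside the pie and connect it with a plain line.
    ax.annotate(label,
                xy=(x, y),
                xytext=(1.3 * np.sign(x), 1.3 * y),
                horizontalalignment=ha,
                arrowprops=dict(arrowstyle='-'))

plt.show()
```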
> @Daanfb what GPU and pytorch version are you using?

I was using the CPU, and my PyTorch version is 2.2.0.
> What TRT engine are you using?

My TensorRT version is 8.6.1. I have updated the post with the correct .engine and .onnx file names, which I had written incorrectly.
@BloodAxe When I exported the model, confidence_threshold was 0.05. I have just printed out the batch scores and I get 0, so maybe that could be the answer to my question....