Sushma

5 comments by Sushma

> Try switching detection_tensorrt_static-640x640.py to the dynamic 320x320-1344x1344 config.

I get the same error:

```
mmengine - ERROR - C:\Users\.conda\envs\openmmlab2\lib\site-packages\mmdeploy\apis\core\pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.pytorch2onnx.torch2onnx` with Call id: 0 failed. exit.
```
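For context, a minimal sketch of the conversion step that fails in the log above, assuming mmdeploy 1.x; the image, model config, and checkpoint paths are placeholders, and the dynamic deploy config is the one referenced in the quoted suggestion.

```python
# Hedged sketch, not the exact command used above: invoke the ONNX export step
# that `mmdeploy.apis.pytorch2onnx.torch2onnx` wraps, with placeholder paths.
from mmdeploy.apis import torch2onnx

img = 'demo/demo.jpg'                        # any sample image used for tracing
work_dir = 'work_dir/trt'                    # placeholder output directory
save_file = 'end2end.onnx'
deploy_cfg = ('configs/mmdet/detection/'
              'detection_tensorrt_dynamic-320x320-1344x1344.py')  # or the static 640x640 config
model_cfg = 'path/to/model_config.py'        # placeholder: the detector's training config
model_checkpoint = 'path/to/checkpoint.pth'  # placeholder: the trained weights

# The pipeline_manager error only reports that this subprocess exited; the
# underlying traceback is printed earlier in the conversion log.
torch2onnx(img, work_dir, save_file, deploy_cfg, model_cfg,
           model_checkpoint, device='cuda:0')
```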

> I encountered the same issue. Has it been resolved?

No, it hasn't been resolved. I tried different input sizes, both static and dynamic, but I get the same error!

> #2819

He solved this problem by running `pip install onnxruntime==1.16.0`. I tried it and it worked, but new problems appear during model inference:

```
[2024-12-02 21:55:54.796] [mmdeploy] [error] [ort_net.cpp:205]...
```
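For what it's worth, a minimal check (an illustration, not part of the original comment) to confirm which onnxruntime build the environment actually loads after pinning the version with `pip install onnxruntime==1.16.0`:

```python
# Hedged sketch: verify the pinned onnxruntime version and the execution
# providers visible to Python; a CPU-only wheel would restrict ONNX Runtime
# inference to CPUExecutionProvider.
import onnxruntime as ort

print('onnxruntime version:', ort.__version__)        # expected: 1.16.0 after pinning
print('available providers:', ort.get_available_providers())
```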

> I rebuilt the environment and successfully resolved the TensorRT version deployment issue. I suspect the previous problem occurred because I deployed the ONNX version (CPU) first, and then deployed...

> Of course, I hope it can help you.
>
> ```python
> 12/04 20:33:02 - mmengine - INFO -
> 12/04 20:33:02 - mmengine - INFO - **********Environmental information**********
> ...
> ```
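The `**********Environmental information**********` header quoted above matches the output of mmdeploy's `tools/check_env.py`. As a rough equivalent (a sketch, assuming mmengine is installed), the same details can be collected directly with mmengine:

```python
# Hedged sketch: dump the environment details (PyTorch, CUDA, cuDNN, compiler
# versions, etc.) that mmdeploy issue reports usually include.
from mmengine.utils.dl_utils import collect_env

for name, value in collect_env().items():
    print(f'{name}: {value}')
```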