YOLO-World
TensorRT
Thanks for your awesome work. Could you release a TensorRT version of YOLO-World? Inference with the YOLO-World ONNX model is kind of slow.
Hi @RuipingL, thanks for your interest in YOLO-World! We will provide the TensorRT version of YOLO-World shortly. In the meantime, could you try deploying to TensorRT from the ONNX model? If you have any questions or make any progress, feel free to open an issue.
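For anyone who wants to try that route, here is a minimal sketch of building an engine from the exported ONNX model with the TensorRT 8.x Python API (the file names are assumptions, not part of the official export):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network, as required for ONNX parsing
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("yolo_world.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

serialized_engine = builder.build_serialized_network(network, config)
with open("yolo_world.engine", "wb") as f:
    f.write(serialized_engine)
```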
Hi @wondervictor, thanks for the reply, and happy Chinese New Year in advance. When converting the model to ONNX, an automatic dimension compression seems to occur (dynamic dimensions get folded to static values), which then causes an error during the conversion to TensorRT. P.S. We are using YOLO-World in an ongoing project with an urgent deadline, so we would appreciate it if you could prioritize this issue.
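For reference, a quick way to check which dimensions the export actually produced is to inspect the ONNX graph inputs with the onnx package (the file name is an assumption):

```python
import onnx

model = onnx.load("yolo_world.onnx")
for inp in model.graph.input:
    # dim_param is a symbolic (dynamic) name, dim_value a fixed integer
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```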
Following this as well
Looking forward to the TensorRT version. One small question: on which NVIDIA edge devices do you estimate 25 ms inference would be achievable? Orin NX, Xavier NX, or Orin Nano? Thanks again for your excellent work!
Any updates on this issue?
Maybe onnxruntime can use the TensorRT ExecutionProvider: https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html
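Something like the sketch below, assuming an exported yolo_world.onnx whose only input is the image tensor (the real YOLO-World export may also take text embeddings, so the input names and shapes here are assumptions):

```python
import numpy as np
import onnxruntime as ort

# Try TensorRT first, then fall back to CUDA and finally CPU
session = ort.InferenceSession(
    "yolo_world.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)

image = np.random.rand(1, 3, 640, 640).astype(np.float32)  # dummy input
outputs = session.run(None, {session.get_inputs()[0].name: image})
```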
https://github.com/PrinceP/tensorrt-cpp-for-onnx?tab=readme-ov-file#yolo-world
Without dynamic batch it's working, but the ONNX export has to be changed.
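For illustration, a minimal static-batch export sketch with torch.onnx.export; the model below is a hypothetical stand-in for YOLO-World, and the point is simply that omitting dynamic_axes keeps every dimension, including batch, fixed:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3)  # stand-in for the real YOLO-World model
dummy = torch.randn(1, 3, 640, 640)  # batch size fixed at 1

torch.onnx.export(
    model,
    dummy,
    "yolo_world_static.onnx",
    opset_version=12,
    input_names=["images"],
    output_names=["output"],
    # no dynamic_axes argument: all dimensions stay static in the graph
)
```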