q yao
Hi, I can convert with cuda=10.2, tensorrt=7.1.3.4, GPU=1080ti. I guess it is related to the GPU type or CUDA version. Sorry, I don't have any solution for now. I will keep tracing...
You can read [getting_started.md](https://github.com/grimoire/mmdetection-to-tensorrt/blob/master/docs/getting_started.md) and [demo/inference.py](https://github.com/grimoire/mmdetection-to-tensorrt/blob/master/demo/inference.py) for details. This project is my "part-time" project and there is still so much to do. Sorry, I don't have enough time to...
Hi, the interpolate layer in TRT is [IResizeLayer](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Graph/Layers.html#iresizelayer), which only has two interpolate modes (`NEAREST`, `LINEAR`). PyTorch has 5 different interpolate modes (`nearest`, `linear` (3D-only), `bilinear`, `bicubic` (4D-only), `trilinear` (5D-only)). And the...
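To make the mismatch concrete, here is a small, hedged sketch (plain Python, no TensorRT dependency; the helper name and the string constants are my own, not part of this repo or of the TensorRT API) of which PyTorch `interpolate` modes `IResizeLayer` can cover:

```python
# Hypothetical helper: map a PyTorch interpolate mode string to the
# IResizeLayer mode that can emulate it. TRT only exposes NEAREST and
# LINEAR; linear/bilinear/trilinear can all lower to LINEAR, while
# bicubic has no native IResizeLayer equivalent.
TRT_RESIZE_MODE = {
    "nearest": "NEAREST",
    "linear": "LINEAR",     # 3D input only in PyTorch
    "bilinear": "LINEAR",   # 4D input only
    "trilinear": "LINEAR",  # 5D input only
}

def to_trt_resize_mode(pytorch_mode: str) -> str:
    """Return the TRT resize mode for a PyTorch mode, or raise."""
    try:
        return TRT_RESIZE_MODE[pytorch_mode]
    except KeyError:
        raise NotImplementedError(
            f"IResizeLayer has no native equivalent of {pytorch_mode!r}"
        )
```

So a model using `bicubic` upsampling would need a custom plugin or a layer rewrite before conversion.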
Hi, could you please provide the config of your model?
Errr... `mmdetection-to-tensorrt` focuses on converting object detection and instance segmentation models in `mmdetection` (only) to `TensorRT`; `torch2trt` is a more general tool for `PyTorch`-to-`TensorRT` conversion. Actually, `mmdetection-to-tensorrt` is based on...
MMDetection has changed a lot since I finished the mask export. I will try to fix it this weekend. The mask output is 28*28 since I do not include the post-process...
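For anyone who wants to handle it on their own side, the missing step is roughly "paste the fixed-size mask into the image at the predicted bbox". Below is a hedged, numpy-only sketch of that idea (nearest-neighbour resize; function name and signature are hypothetical, not the repo's or mmdetection's actual post-process, and it assumes the bbox lies inside the image):

```python
import numpy as np

def paste_mask(mask28, bbox, img_h, img_w, thr=0.5):
    """Paste a fixed-size mask prediction (e.g. 28x28) into a full-size
    binary mask at the given (x1, y1, x2, y2) bbox."""
    x1, y1, x2, y2 = [int(round(v)) for v in bbox]
    h = max(y2 - y1, 1)
    w = max(x2 - x1, 1)
    # Nearest-neighbour index maps from the bbox grid to the mask grid.
    ys = (np.arange(h) * mask28.shape[0] // h).clip(0, mask28.shape[0] - 1)
    xs = (np.arange(w) * mask28.shape[1] // w).clip(0, mask28.shape[1] - 1)
    resized = mask28[np.ix_(ys, xs)]
    full = np.zeros((img_h, img_w), dtype=bool)
    full[y1:y1 + h, x1:x1 + w] = resized > thr
    return full
```

A production version would use bilinear resampling and clip the bbox to the image, but this shows where the 28*28 output is supposed to go.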
No. That's an attractive idea, but I'm afraid I do not have enough time to finish it. I plan to add a tutorial about how to add a convertor or module...
@animikhaich Sorry, I do not have experience with Triton Inference Server. If you want to deploy mmdetection without TRT, you can open an issue on the mmdetection repo. Hope they can...
Dynamic input shapes do need more memory. If your input images have a fixed shape, such as 800*1088, setting the opt_shape_param as follows should reduce memory usage (sketch: min/opt/max all pinned to the fixed shape):

```python
opt_shape_param = [
    [
        [1, 3, 800, 1088],  # min shape
        [1, 3, 800, 1088],  # opt shape
        [1, 3, 800, 1088],  # max shape
    ]
]
```
Hi, I have tested on a 1660ti and a 2070s, with both PyTorch 1.8 and 1.9 and TensorRT 8.0.3, and the code works on my side. Can you provide a dockerfile (you can start from...