Chen Xin
Can you try setting the config according to https://github.com/open-mmlab/mmdeploy/issues/261?
The problem is due to the CUDA version. If you use CUDA 11+, you won't hit `(Assertion cublasStatus == CUBLAS_STATUS_SUCCESS failed.)`
How do you test the model with mmseg? If you use `tools/test.py` or `test/profile.py` to test a model on mmdeploy, it only counts the `model inference` time; `preprocess` and `postprocess`...
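To illustrate the point above, here is a minimal sketch (not mmdeploy code) of why timing only the model forward pass underestimates end-to-end latency. The `preprocess`, `infer`, and `postprocess` functions are stand-ins with dummy delays, not real pipeline stages:

```python
# Illustrative sketch: a profiler that times only `infer` will report
# a smaller number than the true end-to-end latency, because the
# preprocess and postprocess stages are excluded.
import time


def preprocess(x):
    time.sleep(0.002)   # stand-in for e.g. resize + normalize
    return x


def infer(x):
    time.sleep(0.005)   # stand-in for the backend forward pass
    return x


def postprocess(x):
    time.sleep(0.001)   # stand-in for e.g. argmax + resize mask
    return x


def timed(fn, x):
    """Run fn(x) and return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    out = fn(x)
    return out, time.perf_counter() - t0


x, t_pre = timed(preprocess, object())
x, t_inf = timed(infer, x)
x, t_post = timed(postprocess, x)
t_total = t_pre + t_inf + t_post

print(f"inference only: {t_inf * 1000:.1f} ms")
print(f"end-to-end:     {t_total * 1000:.1f} ms")
```

So two tools can report different numbers for the same model simply because they time different spans of the pipeline.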
@jyang68sh Sorry for the late reply. The difference is mainly due to the different backends (PyTorch vs TensorRT). I tested this [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/stdc/stdc2_512x1024_80k_cityscapes.py) on a 2070S. With benchmark.py, the QPS is 53.90. If...
You may first check whether the file exists. If it still can't convert the model, please share your full conversion command.
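A quick sanity check along those lines might look like this; the checkpoint path below is a placeholder, not a path from the issue — substitute whatever file your conversion command points at:

```python
# Verify the input file exists before invoking the converter, so a
# typo in the path fails fast with a clear message.
import os


def check_exists(path: str) -> bool:
    """Return True only if `path` is an existing regular file."""
    return os.path.isfile(path)


checkpoint = "work_dirs/model/latest.pth"  # hypothetical path
if not check_exists(checkpoint):
    print(f"checkpoint not found: {checkpoint}")
```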
Seems similar to songxian's problem. Could you please take a look at this issue? @grimoire
> @irexyc Let's check if the mmdeploy-cuda11.1 prebuilt package works on cuda11.3. As I tested on the Ubuntu platform, it worked.

I checked, and it worked on Windows.
Can you print the result of `python tools/check_env.py`?
We will look into the Python inference TensorRT problem. As for the onnxruntime error, it seems you didn't build mmdeploy with onnxruntime. If you want to build mmdeploy with trt and...
I don't know how to reproduce your problem, because if you use the precompiled package, the `python api (if you run python object_detection.py)` and the `c api` should load the same libraries, so it...