Results 78 comments of AllentDan

Hi, @timothylimyl. It seems `No CUDA runtime is found` is raised while building MMCV with TensorRT. You may refer to [mmdeploy](https://github.com/open-mmlab/mmdeploy) and its `Dockerfile` for a working setup.
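
For reference, building that image is roughly the following (the image tag is arbitrary; the `docker/GPU/` path is taken from the mmdeploy repository layout linked below):

```shell
# Minimal sketch: build the GPU image shipped with mmdeploy.
git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
docker build docker/GPU/ -t mmdeploy:gpu
```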

It seems an environment variable needs to be specified in the Dockerfile. Please try the methods from [mmdet issue 281](https://github.com/open-mmlab/mmdetection/issues/281) and the environment variables in the [mmdeploy dockerfile](https://github.com/open-mmlab/mmdeploy/blob/master/docker/GPU/Dockerfile).
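
For illustration, the usual lines look like this; the architecture list here is an assumption and should be trimmed to the GPUs you actually target:

```dockerfile
# Sketch of the commonly needed env vars: FORCE_CUDA builds the CUDA
# extensions even when no GPU is visible during `docker build`, and
# TORCH_CUDA_ARCH_LIST pins the compute capabilities to compile for.
ENV FORCE_CUDA="1"
ENV TORCH_CUDA_ARCH_LIST="7.0 7.5 8.0 8.6+PTX"
ENV DEBIAN_FRONTEND=noninteractive
```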

> Hi @AllentDan , my previous dockerfile already has those env `ENV FORCE_CUDA="1"` and `ENV DEBIAN_FRONTEND=noninteractive`
>
> Edit: add my requirements.txt for completeness:
>
> ```
> --find-links https://download.pytorch.org/whl/torch_stable.html...
> ```

I mean:

```shell
# Drop the original lines 144-145 of setup.py, then insert a hard-coded
# TensorRT library path at line 142.
RUN sed -i '144,145d' setup.py && \
    sed -i '142 i\ \ \ \ \ \ \ \ tensorrt_lib_path = "/usr/lib/x86_64-linux-gnu/"' setup.py
```

And make sure `RUN python...

Hi, please refer to the closed MMOCR issue [here](https://github.com/open-mmlab/mmocr/issues/678). MMOCR recognition models do not fully support dynamic batch inference because of `valid_ratios`.

You may update the Dockerfile build steps as well.

Satrn accepts 3-channel inputs. Please use `configs/mmocr/text-recognition/text-recognition_tensorrt_static-32x32.py` instead.
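
For context, a sketch of the conversion call with that config, assuming mmdeploy's `tools/deploy.py`; the model config, checkpoint, and test image paths are placeholders:

```shell
# Hypothetical invocation: convert SATRN to TensorRT with the static
# 3-channel config mentioned above.
python tools/deploy.py \
    configs/mmocr/text-recognition/text-recognition_tensorrt_static-32x32.py \
    ${SATRN_MODEL_CFG} \
    ${SATRN_CHECKPOINT} \
    ${TEST_IMG} \
    --work-dir work_dirs/satrn_trt \
    --device cuda:0
```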

Oh, please use satrn_small instead. 2 GB is the size limit of ONNX protobuf serialization.
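
As a quick sanity check (the file name `end2end.onnx` and the work dir are assumptions based on mmdeploy's default output):

```shell
# If the exported file approaches 2 GB, protobuf serialization of the
# ONNX graph fails or produces an unloadable model.
ls -lh work_dirs/satrn_trt/end2end.onnx
```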

Even if it works on another computer, that does not mean it is correct. As I said, if you want to use ONNXRuntime to do the inference, just use...