DrPatrickLi

5 comments by DrPatrickLi

@conan2333 There are several things you can try to boost the inference speed: 1) replace the backbone with a lightweight model, e.g. MobileNetV3 (see the sketch below); 2) apply network pruning; 3) deploy...
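
Below is a minimal sketch (not from this repo) of the backbone swap in 1): it wraps torchvision's MobileNetV3-Small feature extractor in a hypothetical segmentation head. The class name, head, number of classes, and input size are all placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class LightweightSegModel(nn.Module):
    """Hypothetical example: MobileNetV3-Small backbone + 1x1 conv head."""

    def __init__(self, num_classes: int):
        super().__init__()
        # Use the MobileNetV3-Small feature extractor as the backbone.
        self.backbone = mobilenet_v3_small(weights=None).features
        # MobileNetV3-Small's last feature map has 576 channels;
        # project it to per-class logits.
        self.head = nn.Conv2d(576, num_classes, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)          # (N, 576, H/32, W/32)
        logits = self.head(feats)
        # Upsample the logits back to the input resolution.
        return nn.functional.interpolate(
            logits, size=x.shape[-2:], mode="bilinear", align_corners=False
        )

model = LightweightSegModel(num_classes=20).eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 473, 473))
print(out.shape)  # torch.Size([1, 20, 473, 473])
```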

Please refer to #6 if you want to export the model to ONNX. A pull request is also welcome once you have it working; a minimal export sketch is below.
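
As a starting point, here is a hedged sketch of a plain `torch.onnx.export` call; the placeholder model, input size, and tensor names are assumptions, not this repo's settings. Note that custom operators such as InPlaceABN may need to be replaced with standard BatchNorm layers before the export succeeds.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the repo's network; in practice,
# build the real model and load its trained checkpoint first.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU()).eval()
dummy_input = torch.randn(1, 3, 473, 473)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
    # Allow a variable batch size at inference time.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```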

@XuYunqiu Yunqiu, can you take a look at this issue?

1) The current codebase adopts the InplaceSyncBN implementation from https://github.com/mapillary/inplace_abn; compilation of the InplaceSyncBN operator is triggered automatically when the code runs. 2) Our former codebase...

You don't need to install inplace_abn yourself. The current codebase already contains the inplace_abn implementation in ./modules, and compilation of the InplaceSyncBN operator is triggered automatically... A usage sketch follows.
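
A minimal usage sketch, assuming the vendored package in ./modules exposes InPlaceABNSync under the same name as the upstream mapillary/inplace_abn project; check ./modules/__init__.py for the exact exports in this repo.

```python
import torch.nn as nn
from modules import InPlaceABNSync  # vendored copy, no pip install needed

block = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False),
    # Fused BatchNorm + activation (LeakyReLU by default), synchronized
    # across GPUs and computed in place to save memory; per the comment
    # above, the underlying CUDA op is compiled automatically at runtime.
    InPlaceABNSync(256),
)
```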