Li Zhang
This is why speed benchmarks include warm-up iterations: the GPU runs at a lower clock frequency while it is idle, so the first iterations are slower. You may try to lock the GPU performance level...
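A minimal timing harness illustrating the warm-up idea (a generic sketch, not mmdeploy's actual benchmark code): the first few calls are executed but discarded, so clock ramp-up and one-time initialization don't pollute the measured latencies.

```python
import time
import statistics

def benchmark(fn, warmup=10, iters=50):
    # Warm-up phase: run the workload without timing it, giving the
    # device time to leave its idle power state and fill caches.
    for _ in range(warmup):
        fn()
    # Measured phase: record per-iteration wall-clock latency.
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    # Median is more robust to occasional outliers than the mean.
    return statistics.median(times)
```

For GPU workloads you would additionally need to synchronize the device before reading the clock (e.g. a device-synchronize call), since kernel launches are asynchronous.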
In my tests, inserting a 2-second interval between inferences incurs 10-30% extra latency.
Hi, what's your Ubuntu version, and how did you install GCC-7?
I followed the instructions in the doc using a fresh Ubuntu 20.04 environment and can't reproduce the problem. The symbol `_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE6resizeEmc` demangles to `std::__cxx11::basic_string::resize(unsigned long, char)`, which is from...
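You can check the demangled name yourself with `c++filt` from binutils, which should be present on any system with a GCC toolchain:

```shell
# Demangle the mangled C++ symbol from the linker error
c++filt _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE6resizeEmc
```

Comparing the demangled signature against the libstdc++ version on your machine (e.g. via `nm -D` on the library) helps confirm whether the missing symbol is an ABI/version mismatch.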
Deploying SOT algorithms from mmtracking may not be easy. As you can see, mmtracking splits them into two functions: `init` and `track`. Although they share some common logic, many steps...
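To illustrate why this split complicates deployment, here is a hypothetical sketch of the two-stage interface (class and method names are illustrative, not mmtracking's real API): the tracker is stateful, so `init` must run once on the first frame before `track` can run on subsequent frames, and an exported single-function model cannot easily carry that state.

```python
class SingleObjectTracker:
    """Illustrative two-stage SOT interface: init once, then track per frame."""

    def __init__(self):
        self.template = None  # state captured from the first frame

    def init(self, frame, bbox):
        # A real tracker would crop `bbox` from `frame` and run a backbone
        # to produce template features; here we just store the inputs.
        self.template = (frame, bbox)

    def track(self, frame):
        if self.template is None:
            raise RuntimeError("call init() before track()")
        # Placeholder: a real tracker correlates the stored template
        # features with the current frame to predict a new box.
        _, last_bbox = self.template
        return last_bbox
```

Exporting this to an inference engine typically means exporting `init` and `track` as two separate graphs and managing the template state outside the model.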
Sorry for the confusion. The APIs for the inference backends and post-processing are not part of the public API and are evolving at a fast pace.
Hi, can you check if #839 solves your problem?
> * How to set the batch size?

`NetModule` has nothing to do with batch size for now.

> * If one detection task is followed by two classification tasks,...
You may find this helpful: https://github.com/open-mmlab/mmdeploy/issues/839#issuecomment-1206029364
The segmentation model is large enough to saturate your device on its own, so don't expect a large speedup from batch inference.