Lyu Han
Thanks for your great suggestion. We are working on it. > `https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/prediction.md` > > Could you learn from this? One framework, all models: just select the model's directory, and everything from code to prediction results is simple and clear. > >...
I quickly browsed https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_using_custom_model.html I think you probably need to implement `Custom Output Parsing`. The `libmmdeploy_tensorrt_ops.so` is actually an `IPlugin` implementation. Since I am not familiar with NVIDIA DeepStream, please allow...
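For reference, a custom output parser in DeepStream is a function compiled into a shared library that converts raw model output tensors into detection objects. Below is a minimal sketch using the standard `nvdsinfer_custom_impl.h` interface from NVIDIA's samples; the function name and the parsing logic are placeholders, not part of mmdeploy:

```cpp
#include <vector>
#include "nvdsinfer_custom_impl.h"  // DeepStream SDK header

// Hypothetical parser for an mmdeploy-exported detector. The actual
// tensor names and layout depend on your exported model; inspect
// outputLayers to find them.
extern "C" bool NvDsInferParseMMDeploy(
    std::vector<NvDsInferLayerInfo> const &outputLayers,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // Decode each raw output tensor into NvDsInferObjectDetectionInfo
    // entries (left/top/width/height, classId, detectionConfidence)
    // and push them into objectList here.
    return true;
}

// Validates the function prototype at compile time.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseMMDeploy);
```

The resulting library is referenced from the `nvinfer` config via `custom-lib-path` and `parse-bbox-func-name`; the plugin ops library would be preloaded separately so TensorRT can resolve the custom layers.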
Hi, @ziggy84 `mmdeploy` focuses on how to deploy PyTorch models on various devices. We would like to leave the mmdeploy integration to the community's repos. It will be our honor to...
Closing this due to long inactivity. Feel free to reopen it if it is still an issue.
@irexyc Let's check whether the mmdeploy-cuda11.1 prebuilt package works on cuda11.3. When I tested it on an Ubuntu platform, it worked.
Hi, we are working on the prebuilt package. You can track the status in PR #347. As for the compiled docker image, I am afraid you might meet with the...
> almost the same problem. a docker image from the docker hub is a better solution. can you supply it? OK. We will work on it.
@GeneralJing @zranguai Have you guys tried building the docker image with the build arg `USE_SRC_INSIDE=true`? If it is enabled, the Aliyun mirror source will be used.
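For anyone unsure how to pass that flag: build args are supplied on the `docker build` command line with `--build-arg`. A sketch, assuming the GPU Dockerfile lives under `docker/GPU` as in the mmdeploy repo layout (adjust the path and tag to your setup):

```shell
# Build the mmdeploy GPU image using the Aliyun mirror for apt/pip sources.
# USE_SRC_INSIDE=true only takes effect if the Dockerfile declares that ARG.
docker build docker/GPU \
    -t mmdeploy:gpu \
    --build-arg USE_SRC_INSIDE=true
```

Note that `--build-arg` values are not persisted in the image; they only affect the build itself.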
> yes, I set USE_SRC_INSIDE=true. The files listed above can be downloaded, but it gets stuck when downloading mmdeploy. I commented out the last few steps, and when the...
Sorry, folks. The built image is too large, over 14 GB. #683 We haven't figured out a good solution yet.