
Results 21 issues of AllentDan

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and more likely to receive feedback. If you do not understand...

Hi, I released a C++ segmentation library based on libtorch (the PyTorch C++ API): [LibtorchSegmentation](https://github.com/AllentDan/LibtorchSegmentation). Could you please add it?

Hello. The same code runs fine on one tfrecord file but raises an error on another tfrecord file, and I don't know what causes it. I am using multi_gpu_train with two 1080 Ti cards. Traceback (most recent call last): File "/home/allent/anaconda3/envs/tensorflow_py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1278, in _do_call return fn(*args) File "/home/allent/anaconda3/envs/tensorflow_py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1263, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "/home/allent/anaconda3/envs/tensorflow_py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1350, in...

Please note that the results of the SDK classification demo will differ from mmcls because ResizeEdge is replaced with Resize.
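The difference matters because the two transforms produce different input geometries. A minimal sketch of the distinction (function names and default sizes here are illustrative, not the actual mmcls/SDK implementations):

```python
# Hedged sketch: a short-edge resize (ResizeEdge-style) preserves aspect
# ratio, while a plain fixed-size Resize does not, so downstream crops
# and predictions can differ between the two pipelines.

def resize_edge(h, w, edge=256):
    """Scale the image so its SHORT edge equals `edge`, keeping aspect ratio."""
    scale = edge / min(h, w)
    return round(h * scale), round(w * scale)

def resize_fixed(h, w, size=(256, 256)):
    """Force both dimensions to `size`, ignoring aspect ratio."""
    return size

print(resize_edge(480, 640))   # (256, 341): aspect ratio preserved
print(resize_fixed(480, 640))  # (256, 256): image is distorted
```

Even with identical weights, the distorted input from the fixed resize can shift classification scores relative to the aspect-preserving pipeline.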


Hello. While adding Rockchip RK3588S support to [mmdeploy](https://github.com/open-mmlab/mmdeploy), the ops supported by toolkit2 version 1.2 include ArgMax. Converting the model from ONNX succeeds, but the following error occurs at runtime: ``` I Connect to Device success! I NPUTransfer: Starting NPU Transfer Client, Transfer version 2.1.0 (b5861e7@2020-11-23T11:50:36) D NPUTransfer: Transfer spec = local:transfer_proxy D NPUTransfer: Transfer...

### Prerequisite - [X] I have searched [the existing and past issues](https://github.com/open-mmlab/mmyolo/issues) but cannot get the expected help. - [X] I have read the [FAQ documentation](https://mmyolo.readthedocs.io/en/latest/faq.html) but cannot get the...

Feature:P1

[LMDeploy](https://github.com/InternLM/lmdeploy), as an AI deployment platform supporting multiple backend services, has always been committed to providing fast and stable AI model deployment services. Now, it supports accelerating the inference and...

`model_zoo/OpenAI/clip-vit-large-patch14-336` and `model_zoo/OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup` from https://huggingface.co/YanweiLi/Mini-Gemini-7B/blob/main/config.json#L31 — what are the exact model paths on Hugging Face?

[LMDeploy](https://github.com/InternLM/lmdeploy) is a toolkit for compressing, deploying, and serving LLMs. As an alternative to other inference-tool integrations, LMDeploy achieves 14.42 qps on an A100 for the Llama 7B model according to...