Lyu Han
How about trying to build from source? We don't have an EulerOS environment.
"finish_reason": "length" 表示模型一直在生成,停不下来 模型是官方的模型么? 复现的方式能不能提供下呢?比如 prompt数据,脚本之类的
Could you please upgrade to v0.3.0 and try again? We fixed some bugs that may resolve the issue you ran into. If the problem persists, I'll find a V100 server and try to reproduce it.
It is the memory pre-allocated for the KV cache.
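For illustration, a sketch of how that pre-allocation can be tuned through the pipeline API, assuming the parameter under discussion is `cache_max_entry_count` (the share of GPU memory reserved up front for the KV cache); the model path and ratio are placeholders:

```python
from lmdeploy import pipeline, TurbomindEngineConfig

# cache_max_entry_count controls the fraction of free GPU memory that the
# TurboMind engine reserves up front for the KV cache; lowering it leaves
# more memory for weights and activations. 0.4 here is only an example.
backend_config = TurbomindEngineConfig(cache_max_entry_count=0.4)

# Placeholder model path.
pipe = pipeline('internlm/internlm2-chat-7b', backend_config=backend_config)
```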
https://lmdeploy.readthedocs.io/en/latest/inference/pipeline.html#usage
Which lmdeploy interface are you using?
This issue is resolved in the latest version, v0.4.1. Please upgrade lmdeploy to v0.4.1 and try again.
Hi @shur-complement, thanks for pointing this issue out. Since vision models might bring in dependencies that are unnecessary for LLM-only use, we let users handle them case by case. We have to...
Can `--no-deps` eliminate conflicts?
That's very kind of you. Looking forward to your PR.