> `pytorch/pytorch:2.1.2-cuda12.1-cudnn8-devel`
>
> Just run
>
> ```
> docker run --gpus all pytorch/pytorch:2.1.2-cuda12.1-cudnn8-devel tail...
> ```
It has high performance and low cost, especially useful in enterprise scenarios 😆
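For reference, a rough sketch of how that image could be used to bring up xinference: keep the container alive, then install and start the server inside it. The install extra, port, and `xinference-local` entrypoint below are my assumptions, not part of the quoted command.

```
# Hedged sketch, assuming the goal is to run xinference inside the devel image.
# Flags, port, and entrypoint are assumptions, not a verified recipe.
docker run -d --gpus all --name xinference \
    -p 9997:9997 \
    pytorch/pytorch:2.1.2-cuda12.1-cudnn8-devel \
    tail -f /dev/null

# Install xinference and start a local server inside the running container.
docker exec -it xinference bash -c \
    'pip install "xinference[all]" && xinference-local --host 0.0.0.0 --port 9997'
```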
Is there any progress on this problem? When can it be fixed? 😞 Thanks
> @Minamiyama Would you be interested in submitting a PR to add xinference support for FastGPT?

Yes, I'd like to ;-p
> Starting the embedding model bge-large-zh reports an error

It seems that running multiple models on a single GPU card is not supported at the moment. Could this restriction be relaxed, at least to allow one LLM and one embedding model to run on the same card at the same time? @UranusSeven
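If the restriction were relaxed, the intended usage would look roughly like the sketch below; the model names and CLI flags are assumptions based on typical `xinference launch` invocations, not a confirmed interface for any particular release.

```
# Hedged sketch: launch one LLM and one embedding model against the same
# xinference server (ideally sharing one GPU). Names and flags are assumptions.
xinference launch --model-name chatglm3 --model-type LLM --size-in-billions 6
xinference launch --model-name bge-large-zh --model-type embedding
```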
0.11.1 works fine.