WebUI bug: I can't choose a Model Engine, so I can't download LLM models
System Info
CUDA 12.1, Python 3.10.1
Running Xinference with Docker?
- [ ] docker
- [X] pip install
- [ ] installation from source
Version info
xinference, version 0.13.0
The command used to start Xinference
Start the service:
xinference-local --host 0.0.0.0 --port 9997
Reproduction
Just open http://localhost:9997/ui/#/launch_model/llm
Expected behavior
Fix the bug so I can download LLMs.
Can you use other LLMs?
No, none of them work; all LLMs show the same problem.
Are there any errors in the backend logs?
Starting the service:
xinference-local --host 0.0.0.0 --port 9997
2024-07-25 15:30:47,788 xinference.core.supervisor 3317257 INFO Xinference supervisor 0.0.0.0:19660 started
2024-07-25 15:30:47,973 xinference.core.worker 3317257 INFO Starting metrics export server at 0.0.0.0:None
2024-07-25 15:30:47,976 xinference.core.worker 3317257 INFO Checking metrics export server...
2024-07-25 15:30:50,897 xinference.core.worker 3317257 INFO Metrics server is started at: http://0.0.0.0:37991
2024-07-25 15:30:50,899 xinference.core.worker 3317257 INFO Xinference worker 0.0.0.0:19660 started
2024-07-25 15:30:50,899 xinference.core.worker 3317257 INFO Purge cache directory: /home/ubuntu/.xinference/cache
2024-07-25 15:30:54,046 xinference.api.restful_api 3316310 INFO Starting Xinference at endpoint: http://0.0.0.0:9997
2024-07-25 15:30:54,175 uvicorn.error 3316310 INFO Uvicorn running on http://0.0.0.0:9997 (Press CTRL+C to quit)
I previously deployed this on another machine without problems; the only issue there was that it couldn't be added to ragflow. After switching machines and following the same steps, I hit many bugs and can't even load a model.
It is probably a Python package environment problem.
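To rule out a broken package environment, the installed versions of a few relevant packages can be checked quickly. This is a minimal diagnostic sketch using only the standard library; the package list passed in is illustrative, not the exact dependency set of Xinference's WebUI.

```python
from importlib.metadata import version, PackageNotFoundError

def check_packages(pkgs):
    """Return {package: installed version, or None if missing}."""
    result = {}
    for pkg in pkgs:
        try:
            result[pkg] = version(pkg)
        except PackageNotFoundError:
            result[pkg] = None
    return result

# Example: packages the Xinference server stack plausibly depends on
# (illustrative names -- compare against your own `pip freeze`).
print(check_packages(["xinference", "fastapi", "uvicorn"]))
```

Comparing this output between the working machine and the broken one would show whether the environments actually differ.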
Is nobody on your team looking into this?
The optional settings now include download_hub, and you can also use model_path to add a model you downloaded yourself. I'm not sure what problem you are describing.
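As a possible workaround while the dropdown is unusable, a model can be launched through the REST API that the WebUI itself talks to, instead of through the form. This is a hedged sketch, assuming a local supervisor on port 9997 and the POST /v1/models launch endpoint of the 0.13.x API; the model name, engine, and parameter values below are illustrative assumptions, not values taken from this thread.

```python
def build_launch_payload():
    """Payload the WebUI would POST to /v1/models when launching an LLM.

    Field names follow Xinference's launch API as of the 0.13.x line
    (assumption -- check the docs for your exact version).
    """
    return {
        "model_name": "qwen2-instruct",   # hypothetical example model
        "model_engine": "transformers",   # what the broken dropdown should offer
        "model_size_in_billions": 7,
        "model_format": "pytorch",
        "quantization": "none",
    }

def launch_via_rest(endpoint="http://localhost:9997"):
    """POST the launch payload to the supervisor; return the new model UID."""
    import requests  # third-party: pip install requests
    resp = requests.post(f"{endpoint}/v1/models", json=build_launch_payload())
    resp.raise_for_status()
    return resp.json()["model_uid"]
```

If this call succeeds while the WebUI form stays empty, the bug is isolated to the frontend rather than the backend.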
(screenshot: WebUI)
When I click a model-related option, such as the Model Engine field in my screenshot, no options pop up, so I can neither download new models nor load existing ones.
The bug is the same whether I install with Docker or pip: clicking the field shows no dropdown, and I can't type into it either.
This issue is stale because it has been open for 7 days with no activity.