GXKIM

45 comments by GXKIM

So right now, is it only possible to use backends like vLLM or LMDeploy? If the model you trained is loaded with Ollama or LM Studio, is it impossible to use `backend = "vlm-http-client"`?

Wait, that’s not right: `vlm-transformers` refers to an online model from Hugging Face. Since I’m running a model locally with LM Studio, should I be using `vlm-http-client` instead? This is...

### env

```
USE_MINERU=true
MINERU_EXECUTABLE="$HOME/uv_tools/.venv/bin/mineru"
MINERU_DELETE_OUTPUT=0              # keep output directory
MINERU_BACKEND=vlm-http-client      # or another backend you prefer
MINERU_SERVER_URL=http://10.xxxx:30000
```

### mineru server

I’m trying to use a VLM for multimodal parsing, but I’m...
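For context, here is a minimal sketch of how those variables could be exercised end to end. It assumes a recent MinerU 2.x CLI whose `-p/-o/-b/-u` flags select input, output, backend, and server URL (check `mineru --help` on your install, since flag names have changed between releases), and that a compatible VLM server is already listening at `MINERU_SERVER_URL`; `sample.pdf` and `./out` are hypothetical paths for illustration:

```bash
# Hedged sketch: export the same variables shown above, then run one parse.
export USE_MINERU=true
export MINERU_EXECUTABLE="$HOME/uv_tools/.venv/bin/mineru"
export MINERU_DELETE_OUTPUT=0                   # keep the output directory
export MINERU_BACKEND=vlm-http-client           # client mode: parsing is done by a remote VLM server
export MINERU_SERVER_URL=http://10.xxxx:30000   # placeholder from the original comment; use your server's address

# Invoke the executable the env var points at; input/output paths are
# hypothetical and only for illustration.
"$MINERU_EXECUTABLE" -p ./sample.pdf -o ./out \
    -b "$MINERU_BACKEND" -u "$MINERU_SERVER_URL"
```

In `vlm-http-client` mode nothing heavy runs locally: if the server URL is unreachable, the client simply fails, which is worth ruling out before suspecting the backend choice itself.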

> ### Describe your problem
> I deployed RAGFlow successfully with docker compose and added models through Ollama; test tasks ran fine. Now I want to use the Xinference framework for model management, but when I add the model I get the error: Fail to access embedding model(bge-large-zh-v1.5). Connection error.
>
> ![Image](https://github.com/user-attachments/assets/bb051023-f1ea-41b1-8cc5-40865d6a82cf)
>
> ![Image](https://github.com/user-attachments/assets/d270ad71-0d9b-4ac5-93bd-7c80bcceaf18)

I checked the ragflow-server logs:

```
NoneType: None
2025-02-28 10:04:43,961 INFO 14 172.19.0.6 - - [28/Feb/2025...
```
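Since the failure is a "Connection error", a first step might be to confirm that the ragflow-server container can reach Xinference at all. A minimal sketch, assuming Xinference is on its default port 9997 with its OpenAI-compatible `/v1/models` listing enabled, and that `curl` is available in the image; `xinference-host` is a placeholder, not an address from the original report:

```bash
# Hedged connectivity check, run from inside the ragflow-server container:
# a "Connection error" in RAGFlow usually means this request fails too.
docker exec -it ragflow-server \
    curl -s http://xinference-host:9997/v1/models
# If it times out, configure RAGFlow with an address that is reachable from
# inside the Docker network (e.g. the host's LAN IP), not 127.0.0.1/localhost,
# which would resolve to the ragflow-server container itself.
```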

yes

luohao-svg ***@***.***> wrote on Tue, Apr 15, 2025 at 16:02:

> @channingy Is it solved?