cheng358

6 comments of cheng358

> For MiniCPM-o 2.6, please refer to this section for [vllm](https://github.com/OpenBMB/MiniCPM-o?tab=readme-ov-file#efficient-inference-with-llamacpp-ollama-vllm) deployment

I deployed it following that section, using vLLM's api_server. The service starts successfully, and plain text chat works, but passing an image as input raises an error.
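For reference, an image request against a vLLM api_server in the OpenAI-compatible style generally looks like the sketch below. The base URL, model name, prompt text, and image URL are placeholders, not values confirmed in this thread.

```python
import json

# Placeholder address of a vLLM api_server deployment (assumption, adjust to yours).
BASE_URL = "http://localhost:8000"
# Chat-style "messages" bodies go to the chat completions route.
ENDPOINT = f"{BASE_URL}/v1/chat/completions"

# OpenAI-style multimodal payload: a text part plus an image_url part.
payload = {
    "model": "MiniCPM",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "http://localhost:8080/demo.jpg"},
                },
            ],
        }
    ],
}

# To actually send it (requires a running server):
#   import requests
#   resp = requests.post(ENDPOINT, json=payload, timeout=60)
#   print(resp.json())
print(json.dumps(payload, indent=2))
```

If the server still rejects requests of this shape, the error usually points at the part type it failed to parse, which helps distinguish a payload problem from a server-side bug.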

> > > For MiniCPM-o 2.6, please refer to this section for [vllm](https://github.com/OpenBMB/MiniCPM-o?tab=readme-ov-file#efficient-inference-with-llamacpp-ollama-vllm) deployment
> >
> > I deployed it following that section, using vLLM's api_server. The service starts successfully, and plain text chat works, but passing an image as input raises an error.
>
> Sry for it... We've fixed this. Please try it again.

Do I need...

> Make sure your code is from our repository (the forked vllm) and on branch `minicpmo`. Then pull the new code and run it again.

Okay, I'll give it a try.

Audio has the same issue. Input:

```json
{
  "model": "MiniCPM",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "Analyze the content of the audio" },
        { "type": "audio_url", "audio_url": { "url": "http://localhost:8080/demo.wav" } ...
```

> Thanks for sharing; we will update the example code on GitHub. The code on Hugging Face is more complete.

Is there a demo for running MiniCPM-o-2.6 with vLLM? I deployed it following the README, but the calls fail with all kinds of errors, and requests in the documented format don't go through.

> > Thanks for sharing; we will update the example code on GitHub. The code on Hugging Face is more complete.

For example, the following request:

```shell
curl --location --request POST 'http://101.230.144.224:12341/v1/completions' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "model": "MiniCPM",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type":...
```
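One likely cause of the errors in the request above, though not confirmed in this thread: it posts a `messages` array to `/v1/completions`. In OpenAI-compatible servers such as vLLM's api_server, `/v1/completions` expects a plain `prompt` string, while chat-style `messages` bodies belong to `/v1/chat/completions`. A minimal sketch of the two body shapes (model name and prompt are placeholders):

```python
# Legacy completions endpoint: takes a plain-text "prompt".
completions_body = {
    "model": "MiniCPM",
    "prompt": "Hello",
}

# Chat completions endpoint: takes a structured "messages" array.
chat_body = {
    "model": "MiniCPM",
    "messages": [{"role": "user", "content": "Hello"}],
}

routes = {
    "/v1/completions": completions_body,
    "/v1/chat/completions": chat_body,
}

for route, body in routes.items():
    # Each body carries exactly one of "prompt" / "messages", never both.
    assert ("prompt" in body) != ("messages" in body)
    print(route, "->", sorted(k for k in body if k != "model"))
```

Switching the curl command's URL from `/v1/completions` to `/v1/chat/completions` would make the endpoint match the `messages` body it already sends.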