taozhiyuai

Results: 117 comments by taozhiyuai

> An alternative solution could be to use the proxy URL that leads to Azure OpenAI, assuming that this project can deploy the specific URL designated for OpenAI. can you...

I just launched 50 rounds for two models; the result shows which is the better model. At the moment, Gemma 7B is the best, and v1.1 is worse.

In my experience, yes. A small model has a high tokens/second rate and constantly generates actions, while a big model has to wait for tokens before it knows how to react. @_@

The record shows the small model can generate more actions thanks to its high tokens/second rate. The 0.5B wins 3 rounds!

Player 1 using: ollama:qwen:14b-chat-v1.5-fp16
Player 2 using: ollama:qwen:0.5b-chat-v1.5-fp16
Round 1 🏟️ (0647) (0) Starting game...

![WechatIMG83](https://github.com/OpenGenerativeAI/llm-colosseum/assets/146583103/af77a716-3266-4584-9936-f5fe4aa8f760) The win rate is 44% after 50 rounds. @oulianov

I entered my Kimi and ChatGLM keys, but the model list is still greyed out; clicking it does not bring up the list, so I cannot select a model. It only shows GPT-3.5.

I did not run `export OLLAMA_HOST=0.0.0.0` and then `ollama serve`. Is the error above related to that? I am on a Mac; Ollama automatically puts an icon in the tray and runs as a server, and Open WebUI etc. all work fine without running `ollama serve` manually. Does running your project require changing the host? Will that affect other apps? When I run Dify in Docker, I just use the LAN IP and never changed OLLAMA_HOST.
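
For reference, a minimal sketch like the one below (the LAN IP is a placeholder and the default Ollama port 11434 is assumed; neither comes from this thread) can show whether the tray-launched Ollama instance answers from another machine. If it only responds on localhost, binding with `OLLAMA_HOST=0.0.0.0 ollama serve` is what would expose it on the LAN.

```python
# Sketch: check whether an Ollama server is reachable at a given address.
# Assumptions: default Ollama port 11434; the LAN IP below is hypothetical.
import json
import urllib.request

OLLAMA_URL = "http://192.168.1.10:11434"  # placeholder LAN IP; adjust to your setup


def list_models(base_url: str) -> list[str]:
    """Query Ollama's /api/tags endpoint and return the installed model names."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


if __name__ == "__main__":
    try:
        print("Reachable, models:", list_models(OLLAMA_URL))
    except OSError as exc:
        # If this fails from another machine but works with http://localhost:11434,
        # the server is likely bound to 127.0.0.1 only, which is what
        # `OLLAMA_HOST=0.0.0.0 ollama serve` changes.
        print("Not reachable:", exc)
```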

How exactly should this be configured?

Ollama and LM Studio both work without any problem.