BotSharp
Error Fetching Models
Hi, I managed to build BotSharp and got the UI running. Can it load any of the Hugging Face models? I was hoping to run local models on my system. I keep getting this error and couldn't figure out how to change the path to the models. I don't want to use OpenAI.
I think the most complete LLM backend is Azure OpenAI.
So it's a complete waste of time to use this, unless I have Azure OpenAI?
Hi @Webslug @hchen2020, I'm the maintainer of LiteLLM (an abstraction layer for calling 100+ LLMs). It lets you create a proxy server to call 100+ LLMs, and I think it can solve your problem (I'd love your feedback if it doesn't).
Try it here: https://docs.litellm.ai/docs/proxy_server (repo: https://github.com/BerriAI/litellm)
Using LiteLLM Proxy Server
import openai

# Route all openai calls through the local LiteLLM proxy (pre-1.0 SDK style)
openai.api_base = "http://0.0.0.0:8000/"  # proxy url
print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
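The snippet above uses the pre-1.0 openai Python SDK. If you're on openai>=1.0, the equivalent would look roughly like this (a minimal sketch, assuming the proxy is listening on the same address and has no auth configured, so any placeholder api_key works):

from openai import OpenAI

# Point the v1 client at the LiteLLM proxy; the placeholder key is fine
# because the proxy holds the real provider credentials.
client = OpenAI(base_url="http://0.0.0.0:8000", api_key="anything")
response = client.chat.completions.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response.choices[0].message.content)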
Creating a proxy server
Ollama models
$ litellm --model ollama/llama2 --api_base http://localhost:11434
Hugging Face Models
$ export HUGGINGFACE_API_KEY=my-api-key #[OPTIONAL]
$ litellm --model huggingface/bigcode/starcoder
Anthropic
$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1
Palm
$ export PALM_API_KEY=my-palm-key
$ litellm --model palm/chat-bison
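If you'd rather skip the proxy process entirely, LiteLLM can also be called as a Python library. This is just a sketch, assuming litellm is pip-installed and Ollama is serving llama2 locally on its default port:

from litellm import completion

# Call a local Ollama model directly through the litellm SDK,
# no proxy server required.
response = completion(
    model="ollama/llama2",
    messages=[{"role": "user", "content": "Hey!"}],
    api_base="http://localhost:11434",
)
print(response.choices[0].message.content)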
Sorry, I gave up on this as it was getting too confusing; in the end I just built my own console Discord AI bot instead.
Thanks for the help and for the project.