Local-LLM-Comparison-Colab-UI
Integrate with LiteLLM - Evaluate 100+ LLMs, 92% faster
Hi @Troyanovsky @klipski, I'm the maintainer of LiteLLM. We let you spin up a proxy server that calls 100+ LLMs through one OpenAI-compatible interface, which makes it easier to run benchmarks and evals.
I'm opening this issue because I believe LiteLLM makes it easier for you to benchmark and evaluate LLMs (I'd love your feedback if it does not).
Try it here: https://docs.litellm.ai/docs/simple_proxy
Source: https://github.com/BerriAI/litellm
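To give a sense of the unified interface, here is a minimal sketch (my example, not from the linked docs) that calls a local and a hosted backend through `litellm.completion` with the same message format. The Ollama `api_base` assumes the default local port; the hosted call assumes `ANTHROPIC_API_KEY` is set in the environment:

```python
# pip install litellm
from litellm import completion

messages = [{"role": "user", "content": "What's the capital of France?"}]

# Local Ollama model (assumes Ollama is serving on its default port)
local = completion(
    model="ollama/llama2",
    messages=messages,
    api_base="http://localhost:11434",
)

# Hosted model; requires ANTHROPIC_API_KEY in the environment
hosted = completion(model="claude-instant-1", messages=messages)

print(local.choices[0].message.content)
print(hosted.choices[0].message.content)
```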
Using LiteLLM Proxy Server
Creating a proxy server
Ollama models
$ litellm --model ollama/llama2 --api_base http://localhost:11434
Hugging Face Models
$ export HUGGINGFACE_API_KEY=my-api-key #[OPTIONAL]
$ litellm --model huggingface/bigcode/starcoder
Anthropic
$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1
PaLM
$ export PALM_API_KEY=my-palm-key
$ litellm --model palm/chat-bison
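Once any of the commands above is running, the proxy exposes an OpenAI-compatible endpoint, so a standard OpenAI client can talk to it. Below is a hedged sketch using the `openai` Python package (v1+ style), assuming the proxy's default address of http://0.0.0.0:8000; the API key is a dummy since the local proxy doesn't require one by default:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local LiteLLM proxy.
client = OpenAI(base_url="http://0.0.0.0:8000", api_key="sk-anything")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the proxy routes requests to the model it was started with
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)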
Using the proxy to run an eval with EleutherAI's lm-evaluation-harness:
$ export OPENAI_API_BASE=http://0.0.0.0:8000  # point the OpenAI client at the proxy (default port)
$ python3 -m lm_eval \
    --model openai-completions \
    --model_args engine=davinci \
    --tasks crows_pairs_english_age
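Because the harness only ever sees the OpenAI API, swapping the model under evaluation is just a matter of restarting the proxy with a different --model argument (e.g. ollama/llama2 vs. palm/chat-bison); the eval command itself stays unchanged, which is what makes side-by-side comparisons cheap.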