damn-vulnerable-llm-agent
Add support for Ollama and Hugging Face via LangChain + LiteLLM
Description:
This PR adds support for running the app with local models via Ollama and hosted models via the Hugging Face Inference API, alongside the existing GPT-4 support. This is implemented with LiteLLM, which allows seamless switching between model providers.
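To illustrate the provider-switching idea, here is a minimal sketch of how a LiteLLM-style model string can be built per provider. The function name, provider keys, and the commented `ChatLiteLLM` usage are illustrative assumptions, not necessarily the code in this PR:

```python
def resolve_model(provider: str, model: str) -> str:
    """Build a LiteLLM-style "provider/model" string (illustrative helper,
    not the PR's actual code). LiteLLM routes requests based on this prefix,
    e.g. "ollama/llama3" for a local Ollama model."""
    prefixes = {
        "ollama": "ollama",            # local models served by Ollama
        "huggingface": "huggingface",  # Hugging Face Inference API
        "openai": None,                # OpenAI model names need no prefix
    }
    if provider not in prefixes:
        raise ValueError(f"unknown provider: {provider}")
    prefix = prefixes[provider]
    return f"{prefix}/{model}" if prefix else model

# With langchain-litellm installed, the resolved string could then be
# passed to ChatLiteLLM (an assumption based on that package's usage):
#
#   from langchain_litellm import ChatLiteLLM
#   llm = ChatLiteLLM(model=resolve_model("ollama", "llama3"))
```

This keeps the rest of the agent code provider-agnostic: only the model string changes when switching backends.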
Changes:
- Introduced langchain-litellm
- Added support for:
  - Local models via Ollama
  - Hosted models via Hugging Face Inference API
- Updated README.md with setup instructions and usage examples
- Added example .env templates for each provider configuration
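As a sketch of the per-provider `.env` templates mentioned above, an Ollama configuration might look like the following. The variable names are assumptions for illustration and may differ from the templates actually added in this PR:

```
# Example .env for a local Ollama model (illustrative variable names)
LITELLM_MODEL=ollama/llama3
OLLAMA_API_BASE=http://localhost:11434
```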
This lays the foundation for more flexible and cost-effective LLM deployments while retaining compatibility with GPT-4o.