
Add support for Ollama and Hugging Face via Langchain + LiteLLM

Open deepthirera opened this issue 11 months ago • 0 comments

Description:

This PR introduces support for running the app with local models via Ollama and with hosted models via the Hugging Face Inference API, alongside the existing GPT-4 support. To enable this, I used LiteLLM, which allows seamless switching between model providers.
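LiteLLM routes requests by a provider-prefixed model string (e.g. `ollama/llama3` for a local Ollama model). A minimal sketch of how such strings could be built for provider switching — the helper name, the provider map, and the example model names are assumptions for illustration, not necessarily what this PR uses:

```python
# Hypothetical helper illustrating LiteLLM-style provider-prefixed model
# strings; the mapping and model names below are assumptions.
PROVIDER_PREFIXES = {
    "openai": "",                    # OpenAI models use bare names, e.g. "gpt-4o"
    "ollama": "ollama/",             # local models, e.g. "ollama/llama3"
    "huggingface": "huggingface/",   # HF Inference API models
}

def litellm_model_string(provider: str, model: str) -> str:
    """Build the provider-prefixed model identifier LiteLLM expects."""
    try:
        prefix = PROVIDER_PREFIXES[provider]
    except KeyError:
        raise ValueError(f"unsupported provider: {provider}")
    return f"{prefix}{model}"
```

A string built this way would then be passed as the model name to the `langchain-litellm` chat model wrapper, so the rest of the LangChain agent code stays unchanged regardless of provider.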

Changes:

- Introduced `langchain-litellm`
- Added support for:
  - Local models via Ollama
  - Hosted models via the Hugging Face Inference API
- Updated README.md with setup instructions and usage examples
- Added example `.env` templates for each provider configuration
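One of the added `.env` templates might look like the following — the variable names and values here are illustrative assumptions, not the exact contents of the templates in this PR:

```
# Example .env for the Ollama provider (values are assumptions)
MODEL_PROVIDER=ollama
MODEL_NAME=llama3
OLLAMA_API_BASE=http://localhost:11434

# For Hugging Face, a token would be needed instead:
# MODEL_PROVIDER=huggingface
# HUGGINGFACE_API_KEY=hf_...
```

Keeping provider selection in environment variables means switching backends requires no code changes, only a different `.env` file.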

This sets the foundation for more flexible and cost-effective LLM deployments while retaining compatibility with GPT-4o.

deepthirera · May 10 '25 02:05