👶 Mini Chat LangChain
A minimal agentic implementation of Chat LangChain that can answer questions about LangGraph. It uses a small model and no indexing or vector stores, just LangGraph's llms.txt file!

To improve the correctness of generated code, Mini Chat LangChain runs a typechecking OpenEvals evaluator "in-the-loop". It first extracts generated code from the agent's output, then pushes it up to an E2B sandbox that installs the required packages and runs pyright over it. If the check fails, the logs are fed back to the original agent as a reflection step so that it can fetch more information about code structure as needed.
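Below is a minimal sketch of that loop, not this repo's actual code: a hypothetical call_model function stands in for the agent's LLM call, and a local pyright subprocess stands in for the E2B sandbox run.

import re
import subprocess
import tempfile


def extract_python_blocks(text: str) -> str:
    # Pull fenced Python code blocks out of the model's response.
    blocks = re.findall(r"```(?:python)?\n(.*?)```", text, re.DOTALL)
    return "\n\n".join(blocks)


def typecheck(code: str) -> tuple[bool, str]:
    # Write the extracted code to a temporary file and run pyright over it.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(["pyright", path], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def answer_with_reflection(question: str, max_attempts: int = 3) -> str:
    feedback = ""
    response = ""
    for _ in range(max_attempts):
        # call_model is a hypothetical stand-in for the agent's model call;
        # any prior typechecker logs are passed back in as reflection context.
        response = call_model(question, feedback)
        code = extract_python_blocks(response)
        if not code:
            return response
        passed, logs = typecheck(code)
        if passed:
            return response
        feedback = logs
    return response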
Installation
First, clone this repo:
git clone https://github.com/jacoblee93/mini-chat-langchain.git
cd mini-chat-langchain
Next, copy the .env.example file to .env and set the following environment variables:
export ANTHROPIC_API_KEY="YOUR_KEY_HERE"
export E2B_API_KEY="YOUR_KEY_HERE"
You can sign up and obtain an Anthropic key here and an E2B key here.
You can also set up LangSmith tracing if desired by setting the following environment variables:
export LANGSMITH_API_KEY="YOUR_KEY_HERE"
export LANGSMITH_TRACING=true
This repo is set up to use uv. Run uv sync to install the required dependencies:
uv sync
# Or, if you prefer not to use uv:
# pip install -e .
Trying it out
You can run uv run langgraph dev to open your graph in LangGraph Studio.
If you do not want to run this project using LangGraph Studio, you will need to run the included agents by importing them as modules.
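For example, here is a hypothetical usage sketch; the real module path and graph name are defined in this repo's langgraph.json, so adjust the import accordingly:

from agent import graph  # hypothetical import path; check langgraph.json for the real one

result = graph.invoke(
    {"messages": [{"role": "user", "content": "How do I add persistence to a LangGraph graph?"}]}
)
print(result["messages"][-1].content)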
You can also run experiments with the agent by running the tests in the tests/ folder.
tests/test_base_agent.py runs without the reflection step and grades its output using the typechecking evaluator, while tests/test_agent.py runs the full agent.
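For example, assuming a standard pytest setup:
uv run pytest tests/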
Thank you!
This repo is meant to provide inspiration for how running evaluators "in-the-loop" as part of your agent can help improve performance. If you have questions or comments, please open an issue or reach out to us @LangChainAI on X (formerly Twitter)!