feat(community): add tools support for litellm
I used the following example to validate the behavior:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor


@tool
def multiply(x: float, y: float) -> float:
    """Multiply 'x' times 'y'."""
    return x * y


@tool
def exponentiate(x: float, y: float) -> float:
    """Raise 'x' to the 'y'."""
    return x**y


@tool
def add(x: float, y: float) -> float:
    """Add 'x' and 'y'."""
    return x + y


prompt = ChatPromptTemplate.from_messages([
    ("system", "you're a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

tools = [multiply, exponentiate, add]

llm = ChatAnthropic(model="claude-3-sonnet-20240229", temperature=0)
# llm = ChatLiteLLM(model="claude-3-sonnet-20240229", temperature=0)

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241"})
```
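For context, litellm follows the OpenAI function-calling convention, so a `@tool` like `multiply` ultimately has to be expressed as a JSON schema roughly like the one below. This is built by hand for illustration only; the actual conversion done inside langchain-core may differ in detail:

```python
def multiply(x: float, y: float) -> float:
    """Multiply 'x' times 'y'."""
    return x * y

# Hand-written OpenAI-style tool spec for `multiply` (illustrative only;
# not the exact output of langchain's tool-conversion utilities).
multiply_spec = {
    "type": "function",
    "function": {
        "name": multiply.__name__,
        "description": multiply.__doc__,
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "number"},
                "y": {"type": "number"},
            },
            "required": ["x", "y"],
        },
    },
}

print(multiply_spec["function"]["name"])  # → multiply
```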
The `ChatAnthropic` version works:
```
> Entering new AgentExecutor chain...
Invoking: `exponentiate` with `{'x': 5, 'y': 2.743}`
responded: [{'text': 'To calculate 3 + 5^2.743, we can use the "exponentiate" and "add" tools:', 'type': 'text', 'index': 0}, {'id': 'toolu_01Gf54DFTkfLMJQX3TXffmxe', 'input': {}, 'name': 'exponentiate', 'type': 'tool_use', 'index': 1, 'partial_json': '{"x": 5, "y": 2.743}'}]
82.65606421491815
Invoking: `add` with `{'x': 3, 'y': 82.65606421491815}`
responded: [{'id': 'toolu_01XUq9S56GT3Yv2N1KmNmmWp', 'input': {}, 'name': 'add', 'type': 'tool_use', 'index': 0, 'partial_json': '{"x": 3, "y": 82.65606421491815}'}]
85.65606421491815
Invoking: `add` with `{'x': 17.24, 'y': -918.1241}`
responded: [{'text': '\n\nSo 3 + 5^2.743 = 85.66\n\nTo calculate 17.24 - 918.1241, we can use:', 'type': 'text', 'index': 0}, {'id': 'toolu_01BkXTwP7ec9JKYtZPy5JKjm', 'input': {}, 'name': 'add', 'type': 'tool_use', 'index': 1, 'partial_json': '{"x": 17.24, "y": -918.1241}'}]
-900.8841[{'text': '\n\nTherefore, 17.24 - 918.1241 = -900.88', 'type': 'text', 'index': 0}]
> Finished chain.
```
The `ChatLiteLLM` version doesn't.
But with the changes in this PR, along with:
- https://github.com/langchain-ai/langchain/pull/23823
- https://github.com/BerriAI/litellm/pull/4554
the result is almost the same:
```
> Entering new AgentExecutor chain...
Invoking: `exponentiate` with `{'x': 5, 'y': 2.743}`
responded: To calculate 3 + 5^2.743, we can use the "exponentiate" and "add" tools:
82.65606421491815
Invoking: `add` with `{'x': 3, 'y': 82.65606421491815}`
85.65606421491815
Invoking: `add` with `{'x': 17.24, 'y': -918.1241}`
responded:
So 3 + 5^2.743 = 85.66
To calculate 17.24 - 918.1241, we can use:
-900.8841
Therefore, 17.24 - 918.1241 = -900.88
> Finished chain.
```
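The numbers in both traces are easy to verify without any LLM in the loop:

```python
# Reproduce the agent's arithmetic directly.
part1 = 3 + 5 ** 2.743    # exponentiate(5, 2.743), then add 3
part2 = 17.24 - 918.1241  # expressed by the agent as add(17.24, -918.1241)

print(round(part1, 2))  # 85.66
print(round(part2, 2))  # -900.88
```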
Hi @igor-drozdov, thanks for this.
Langchain has a suite of standard tests for chat models in the standard-tests library, which is already a test dependency of langchain-community. What do you think about adding these for LiteLLM?
Here is an example implementation for integration tests: https://github.com/langchain-ai/langchain/blob/master/libs/partners/fireworks/tests/integration_tests/test_standard.py
And unit tests: https://github.com/langchain-ai/langchain/blob/master/libs/partners/fireworks/tests/unit_tests/test_standard.py
We can add xfails (as in the example) if needed where tests are failing. But this will at least illuminate the failures and provide tests for the functionality implemented here.
> Langchain has a suite of standard tests for chat models in the standard-tests library, which is already a test dependency of langchain-community. What do you think about adding these for LiteLLM?
@ccurme sounds good, thanks for the links!
> We can add xfails (as in the example) if needed where tests are failing. But this will at least illuminate the failures and provide tests for the functionality implemented here.
Yes, I've added xfails for the failures that seem to be unrelated.
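For reference, marking a known failure with xfail in a test class looks roughly like this (the class and test names here are hypothetical placeholders, not the actual standard-tests API):

```python
import pytest


class TestLiteLLMStandard:
    """Sketch only; the real standard-tests base classes differ."""

    @pytest.mark.xfail(reason="unrelated upstream failure")
    def test_usage_metadata(self):
        # Placeholder body standing in for an inherited standard test.
        raise NotImplementedError
```

With `xfail`, the suite still runs the test and reports it as an expected failure instead of breaking CI, so the gap stays visible.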
> Here is an example implementation for integration tests
The integration tests perform real requests, right? I specified `ollama/mistral` as the model because it's the easiest option for testing litellm locally.
EDIT: the tests fail, though, because the `litellm` package is not found. Should it be included in pyproject as a dependency (at least as a dev dependency), or is there another way to work around this? 🤔
@ccurme thanks! The tests that were failing before now pass; however, I see a different failure: https://github.com/langchain-ai/langchain/actions/runs/9877887376/job/27280600253?pr=23906, and the Python 3.12 jobs hang. I wonder if it's unrelated to the PR itself and caused by some cached actions setup. I can recreate the PR if necessary 🤔
@ccurme ah, I've noticed that `extended_testing_deps.txt` should be updated as well. I've added `litellm` there, but still hit the import problem because `import litellm` raises:
```
ImportError: cannot import name 'model_validator' from 'pydantic'
```
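That error matches the pydantic v1/v2 split: `model_validator` is only exported by pydantic v2, while v1 exposes `root_validator` instead. A quick probe of which major version is active (an illustrative guard, not a recommendation; real code should pin dependencies rather than branch at import time):

```python
# `model_validator` only exists in pydantic >= 2, so this import
# doubles as a crude version probe.
try:
    from pydantic import model_validator  # noqa: F401  (pydantic v2)
    pydantic_major = 2
except ImportError:
    pydantic_major = 1  # pydantic v1 (or pydantic not installed)

print(f"pydantic major version: {pydantic_major}")
```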
This happens because pydantic v1 is in use, and when I try upgrading to v2, there's a conflict with this library: https://pypi.org/project/javelin-sdk/0.2.5/#description
I pinned litellm to a version that doesn't raise the import error, and now the tests pass.
Thank you for your reviews! Could you please have another look?