langchain: Non-dict inputs in LCEL chains break the instrumentation
Summary of problem
When using a multi-step LCEL chain with dd-trace active, the langchain patch runs traced_lcel_runnable_sequence (and its async version) to instrument the call. This function tries to set a tag for every input, as seen in this snippet:
```python
try:
    inputs = get_argument_value(args, kwargs, 0, "input")
except ArgumentError:
    inputs = get_argument_value(args, kwargs, 0, "inputs")
if integration.is_pc_sampled_span(span):
    if not isinstance(inputs, list):
        inputs = [inputs]
    for idx, inp in enumerate(inputs):
        if not isinstance(inp, dict):
            span.set_tag_str("langchain.request.inputs.%d" % idx, integration.trunc(str(inp)))
        else:
            for k, v in inp.items():
                span.set_tag_str("langchain.request.inputs.%d.%s" % (idx, k), integration.trunc(str(v)))
```
The problem is that the input might be a single, non-dict-like object. This happens, for example, when using langchain's Pydantic parser as the output of a chain step and piping it into a further PromptTemplate. We've reproduced this with the create_structured_output_runnable call, but it should happen with any chain that has a non-dict input. The instrumentation then throws an error when trying to access `inp.items()`.
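To make the failure mode concrete, here is a langchain-free sketch. The `CreateTaskResponse` class and `tag_inputs` helper are illustrative stand-ins (not dd-trace code): the loop treats every input as a mapping, so any object without an `.items()` method crashes it.

```python
class CreateTaskResponse:
    """Stands in for a Pydantic model instance returned by a chain step."""

    def __init__(self, task_id):
        self.task_id = task_id


def tag_inputs(inputs):
    """Mimics the instrumentation's tagging loop when it assumes dict inputs.

    `tags` is a plain dict standing in for span.set_tag_str calls.
    """
    tags = {}
    if not isinstance(inputs, list):
        inputs = [inputs]
    for idx, inp in enumerate(inputs):
        for k, v in inp.items():  # AttributeError for any non-dict input
            tags["langchain.request.inputs.%d.%s" % (idx, k)] = str(v)
    return tags
```

A dict input is tagged key by key, but passing a `CreateTaskResponse` instance raises `AttributeError: 'CreateTaskResponse' object has no attribute 'items'`, matching the traceback below.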
Which version of dd-trace-py are you using?
ddtrace = "^2.9.2"
Which version of pip are you using?
Using poetry, but that's irrelevant to this issue.
Which libraries and their versions are you using?
`pip freeze`
python = "^3.11"
pudb = "^2024.1"
grpcio = "^1.62.1"
ipython = "^8.22.1"
environs = "^11.0.0"
grpcio-tools = "^1.62.1"
langchain-mistralai = "^0.0.5"
langchain-openai = "^0.0.8"
pydantic-settings = "^2.2.1"
grpcio-reflection = "^1.62.1"
langchain-community = "^0.0.29"
torch = [
    { url = "https://download.pytorch.org/whl/cpu/torch-2.1.1%2Bcpu-cp311-cp311-linux_x86_64.whl", markers = "sys_platform == 'linux' and platform_machine == 'x86_64'" },
    { url = "https://download.pytorch.org/whl/cpu/torch-2.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", markers = "sys_platform == 'linux' and platform_machine == 'aarch64'" },
    { url = "https://download.pytorch.org/whl/cpu/torch-2.1.1-cp311-none-macosx_11_0_arm64.whl", markers = "sys_platform == 'darwin' and platform_machine == 'arm64'" },
]
sentence-transformers = "^2.5.1"
psycopg2-binary = "^2.9.9"
pgvector = "^0.2.5"
langchain = "^0.1.13"
pypdf = "^4.1.0"
alembic = "^1.13.1"
openai = "^1.32.0"
ddtrace = "^2.9.2"

How can we reproduce your problem?
This happens, for example, when using langchain's Pydantic parser as the output of a chain step and piping it into a further PromptTemplate. We've reproduced this with the create_structured_output_runnable call, but it should happen with any chain that has a non-dict input. The instrumentation throws an error when trying to access `inp.items()`.
What is the result that you get?
An AttributeError when calling chain.invoke().
In my case, the Pydantic object is being accessed as if it were a dict:
builtins.AttributeError: 'CreateTaskResponse' object has no attribute 'items'
What is the result that you expected?
No exception raised
Submitted a fix proposal in #9706
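For context, a fix of this shape checks the input's type before expanding it key by key. This is an illustrative sketch using a plain dict of tags, not the actual patch in #9706:

```python
def tag_inputs_safely(inputs):
    """Expand dict inputs key by key; stringify anything else whole.

    `tags` is a plain dict standing in for span.set_tag_str calls.
    """
    tags = {}
    if not isinstance(inputs, list):
        inputs = [inputs]
    for idx, inp in enumerate(inputs):
        if isinstance(inp, dict):
            for k, v in inp.items():
                tags["langchain.request.inputs.%d.%s" % (idx, k)] = str(v)
        else:
            # e.g. a Pydantic model: record its string representation instead
            tags["langchain.request.inputs.%d" % idx] = str(inp)
    return tags
```

With this guard, a dict still produces per-key tags, while a Pydantic model (or any other object) is recorded under a single positional tag instead of raising.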
Hi @Towerthousand! Thank you for raising this issue, and opening a PR! We're always looking to improve our langchain integration, especially as new use cases come up.
With that in mind, can you try providing a small code snippet that reproduces this issue? I see you've provided some references to the Pydantic parser, as well as create_structured_output_runnable (which I've noticed is deprecated), but so far I haven't been able to reproduce this myself. This would help us add a good regression test for your PR, and also tell us a bit more about your use case. Thanks!
Also: I didn't see langchain in your pip freeze output. If you could supply the versions of langchain, langchain-community, etc. (the whole langchain suite, if possible) that you're using, that'd be awesome!
Sorry, I sent the wrong deps; fixed! We're in the process of migrating to langchain 0.2, so I'll write a regression test once we do and I confirm it still happens without the deprecated function. I agree the PR should include it :)
Maybe dumb question, but how can I easily run the langchain test folder tests?
Thanks for the super fast reply!
No worries at all! And definitely not a dumb question - we have some steps here outlining how to run these tests locally. I would make a venv and run
pip install riot
pip install -e .
to install riot and a local version of the tracer with your changes, before running the test command in that doc. You don't need to start any Docker services for langchain.
However, I'm more than happy to take on writing this test myself; I would just need a repro if you have one on hand that you're OK with sharing! In that case, I'll open a new PR and cherry-pick your commit, since it'll be easier to run our CI that way too.
Hi @Towerthousand! Just wanted to update you here that we were able to write a small reproduction for this. I've opened up #9747 with your fix cherry-picked, with a small test added on. Thanks for bringing this issue to our attention!