
[bug] Langchain ToolRuntime information present in tool UI makes it hard to read

Open uu9 opened this issue 1 month ago • 6 comments

Describe the bug The ToolRuntime information makes the tool result too verbose to read.

Screenshots: (image attached)

To Reproduce Example: exchange_rate_tool.py

# /// script
# dependencies = [
#   "langchain>=1.0.0",
#   "langchain-openai",
#   "openinference-instrumentation-langchain",
#   "opentelemetry-sdk",
#   "opentelemetry-exporter-otlp-proto-http",
#   "requests",
# ]
# ///
import requests
from langchain import agents
from langchain.tools import tool, ToolRuntime
from langchain_openai import ChatOpenAI
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from openinference.instrumentation.langchain import LangChainInstrumentor

endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)


@tool
def get_exchange_rate(
    runtime: ToolRuntime,
    currency_from: str = "USD",
    currency_to: str = "EUR",
    currency_date: str = "latest"
):
    """Retrieves the exchange rate between two currencies on a specified date."""
    return requests.get(
        f"https://api.frankfurter.app/{currency_date}",
        params={"from": currency_from, "to": currency_to},
    ).json()


tools = [get_exchange_rate]
llm = ChatOpenAI()
agent = agents.create_agent(llm, tools)

if __name__ == "__main__":
    agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": "What is the exchange rate from US dollars to Swedish "
                    "currency today?",
                }
            ]
        }
    )
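
Note: with the inline script metadata above, this can be run with e.g. uv run exchange_rate_tool.py (assuming OPENAI_API_KEY is set in the environment and a trace collector such as Phoenix is listening on http://127.0.0.1:6006).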

Expected behavior A way for the user to control whether ToolRuntime information should be present in the result.

Desktop (please complete the following information): langchain==1.0.0, openinference-instrumentation-langchain==0.1.54

uu9 · Oct 27 '25 05:10

Hi @uu9! I'm Dosu and I'm helping the openinference team.

There isn't currently a built-in way to control or suppress ToolRuntime information in tool results when using openinference-instrumentation-langchain. The instrumentation captures and reports the raw tool response and runtime metadata as-is, which can lead to verbose outputs in the UI. This is a known limitation that has come up in related issues: maintainers have confirmed that the instrumentation does not filter or modify the tool response before reporting it, and there are no configuration flags or environment variables to hide ToolRuntime details in outputs [source].

If you need to filter or redact ToolRuntime metadata before it reaches your observability backend, you can add a custom OpenTelemetry SpanProcessor to your tracing pipeline. This allows you to programmatically filter or modify span attributes (including ToolRuntime data) before export, but it requires custom code and isn't a built-in feature [source].

Some users have requested UI-level improvements to selectively render only relevant parts of tool responses, but these are not yet available as configuration options in the instrumentation or UI [source].

If you want an example of a custom SpanProcessor for filtering, let me know!

To reply, just mention @dosu.



dosubot[bot] · Oct 27 '25 06:10

@dosu RunContextWrapper doesn't appear in the result with the OpenAI Agents SDK (openinference-instrumentation-openai-agents). What's an example of customizing a SpanProcessor?

uu9 · Oct 27 '25 06:10

The reason you don't see RunContextWrapper in results with the OpenAI Agents SDK (openinference-instrumentation-openai-agents) is that its instrumentation captures and reports different runtime metadata than LangChain's. The OpenAI Agents SDK does not serialize or emit RunContextWrapper in its span attributes by default; this is an implementation difference between the two integrations [source].

To customize what gets exported (for example, to filter or redact verbose metadata like ToolRuntime), you can implement a custom OpenTelemetry SpanProcessor. Here’s a Python example using a conditional processor to filter spans before export:

from typing import Callable

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SpanExporter, SpanProcessor

class ConditionalSpanProcessor(SpanProcessor):
    """Exports a span only when `condition(span)` returns True."""

    def __init__(self, exporter: SpanExporter, condition: Callable):
        self.exporter = exporter
        self.condition = condition

    def on_start(self, span, parent_context=None):
        pass

    def on_end(self, span):
        # Only export spans that meet the condition
        if self.condition(span):
            self.exporter.export([span])

    def shutdown(self):
        self.exporter.shutdown()

    def force_flush(self, timeout_millis=30000):
        return self.exporter.force_flush(timeout_millis)

# Example: drop spans that mention unwanted metadata. Note this checks
# the span name and attribute *keys*; ToolRuntime usually appears inside
# attribute values (e.g. the tool's input string), so you may need to
# inspect values instead.
def filter_toolruntime(span):
    return "toolruntime" not in span.name.lower() and not any(
        "toolruntime" in key.lower() for key in (span.attributes or {})
    )

tracer_provider = TracerProvider()
otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces")
tracer_provider.add_span_processor(
    ConditionalSpanProcessor(otlp_exporter, filter_toolruntime)
)
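
Hooked into the original repro, the provider is then handed to the instrumentor exactly as in exchange_rate_tool.py:

from openinference.instrumentation.langchain import LangChainInstrumentor

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)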

You can adapt the filter_toolruntime function to match your filtering needs (e.g., by inspecting span attributes for specific keys or values) [source].

For JavaScript/TypeScript, you can pass a custom tracer provider with your own span processor to OpenAI instrumentation as shown in the docs [source].

Let me know if you want a more advanced example for redacting or modifying attributes instead of filtering whole spans!
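
As a concrete starting point, here is a sketch of such a redacting processor. It rewrites matching attribute values in place by reaching into the span's private _attributes mapping, which is not public API and may break across opentelemetry-sdk versions, so treat it as an assumption rather than a supported pattern. Register it before the exporting processor so it runs first:

import re

from opentelemetry.sdk.trace import SpanProcessor

class RedactingSpanProcessor(SpanProcessor):
    # Illustrative pattern; adjust it to match the ToolRuntime repr you see.
    _PATTERN = re.compile(r"ToolRuntime\([^)]*\)")

    def on_start(self, span, parent_context=None):
        pass

    def on_end(self, span):
        # Relies on the private _attributes mapping of ReadableSpan,
        # which is not public API and may change between SDK versions.
        attrs = getattr(span, "_attributes", None)
        if not attrs:
            return
        for key, value in list(attrs.items()):
            if isinstance(value, str) and self._PATTERN.search(value):
                attrs[key] = self._PATTERN.sub("ToolRuntime(omitted)", value)

    def shutdown(self):
        pass

    def force_flush(self, timeout_millis=30000):
        return True

# Register the redactor BEFORE the exporting processor so spans are
# rewritten before they are exported:
# tracer_provider.add_span_processor(RedactingSpanProcessor())
# tracer_provider.add_span_processor(SimpleSpanProcessor(otlp_exporter))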

To reply, just mention @dosu.



dosubot[bot] · Oct 27 '25 06:10

I did some research and found out why. The input comes from LangChain itself. The OpenAI Agents SDK doesn't include the function context in the function schema, so it won't appear in the Input block, while LangChain does the opposite.

https://github.com/openai/openai-agents-python/blob/main/src/agents/function_schema.py#L285

            if origin is RunContextWrapper or origin is ToolContext:
                takes_context = True  # Mark that the function takes context
            else:
                filtered_params.append((first_name, first_param))

Both are reasonable, but I think most users don't want the runtime part. Is there any simple solution for this?

Update: LangChain uses str(...) to produce input_str with the runtime information baked in, making it hard to hook any post-processing. https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools/base.py#L799

        run_manager = callback_manager.on_tool_start(
            {"name": self.name, "description": self.description},
            tool_input if isinstance(tool_input, str) else str(tool_input),
            ...
        )
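
To illustrate the problem with a contrived stand-in (not LangChain's actual objects): once the dict has been flattened to a string, the structure is gone, and anything downstream can only do text-level cleanup:

class FakeRuntime:
    """Stand-in for ToolRuntime, just to show the flattening."""
    def __repr__(self):
        return "ToolRuntime(state=..., store=..., tool_call_id=...)"

tool_input = {"currency_from": "USD", "currency_to": "SEK", "runtime": FakeRuntime()}
print(str(tool_input))
# {'currency_from': 'USD', 'currency_to': 'SEK', 'runtime': ToolRuntime(state=..., store=..., tool_call_id=...)}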

uu9 · Oct 27 '25 09:10

Here is my Any% one-line solution for this. Still looking for a better solution.

ToolRuntime.__repr__ = lambda self: "omitted"
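
A slightly more defensive variant of the same hack (still a monkey-patch, not a supported API) keeps the original repr around so it can be restored later, e.g. in tests:

from langchain.tools import ToolRuntime

_original_repr = ToolRuntime.__repr__

def _short_repr(self):
    # Replace the verbose field dump with a fixed marker.
    return "<ToolRuntime omitted>"

ToolRuntime.__repr__ = _short_repr
# To undo: ToolRuntime.__repr__ = _original_repr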

uu9 · Oct 27 '25 11:10

hey @uu9, thanks for filing this issue and providing such a detailed reproduction example!

We appreciate you taking the time to dig into this and include the code snippet - it really helps us understand the problem. The ToolRuntime information cluttering the tool UI is definitely something we should address.

We are currently evaluating and revisiting our LangChain auto-instrumentation following the LangChain v1.0 release. We'll keep what's described in this ticket in mind as we work through those improvements, particularly around providing better control over what metadata gets surfaced in the UI.

Thanks again for bringing this to our attention!

caroger · Nov 14 '25 07:11