Roger Yang
I mentioned it because I saw it in their type union [here](https://github.com/langchain-ai/langgraph/blob/9467a0e2bb5cf6ae196af286c589e31aa004ce79/libs/langgraph/langgraph/graph/_node.py#L76), which indicates that you could just declare it in the node function signature. I also tested it with...
Currently, trace IDs are designed to be randomly generated and unique for each trace, so there isn’t a straightforward way to pass one in manually.
Just to confirm: you added `config: RunnableConfig` to your node definitions and included it in `llm.ainvoke` like it's shown [here](https://github.com/Arize-ai/openinference/issues/2190#issuecomment-3286290249)—not via closure, but just changing the function signature and definition—and...
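For reference, a minimal runnable sketch of that pattern. LangGraph and a real chat model aren't assumed here: `FakeLLM` is a hypothetical stand-in for the model, and `config` is a plain dict rather than an actual `RunnableConfig`. The point is only the shape — declare `config` in the node signature (rather than capturing it via closure) and forward it to `ainvoke`:

```python
import asyncio

class FakeLLM:
    """Hypothetical stand-in for a chat model; a real model would pick up
    callbacks/tracing context from the forwarded `config`."""
    async def ainvoke(self, messages, config=None):
        return {"output": f"echo:{messages[-1]}", "traced": config is not None}

llm = FakeLLM()

async def my_node(state: dict, config: dict) -> dict:
    # `config` is declared in the node signature so the framework can inject
    # it; forwarding it keeps the tracing context attached to the LLM call.
    response = await llm.ainvoke(state["messages"], config=config)
    return {"messages": state["messages"] + [response["output"]]}

result = asyncio.run(my_node({"messages": ["hi"]}, config={"callbacks": []}))
print(result["messages"][-1])  # echo:hi
```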
What happens if you don't add `config` to your tool definition? Only `ainvoke` needs `config=config`. The tool itself doesn't need it.

```
@tool
async def knowledge_base_reference_tool(
    query: str,
) ->...
```
Have you tried adding `config: RunnableConfig` (making sure it's of the type `RunnableConfig` and not `Annotated[any, InjectedToolArg]`) directly without changing anything else? I don't know enough about how the rest...
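To illustrate why the annotation type matters, here's a stdlib-only sketch of the general injection mechanism: a framework can inspect a function's signature and treat parameters annotated with the config type as injected at runtime, excluding them from the tool's user-facing argument schema. `RunnableConfig` and `tool_args` below are stand-ins for illustration, not LangChain's actual implementation:

```python
import inspect

class RunnableConfig(dict):
    """Stand-in for LangChain's RunnableConfig type."""

def tool_args(fn):
    # Parameters annotated with RunnableConfig would be injected by the
    # framework, so they are excluded from the tool's argument schema.
    sig = inspect.signature(fn)
    return [name for name, p in sig.parameters.items()
            if p.annotation is not RunnableConfig]

async def knowledge_base_reference_tool(query: str, config: RunnableConfig):
    ...

print(tool_args(knowledge_base_reference_tool))  # ['query']
```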
Thanks for confirming! Async patterns can get tricky and introduce subtle issues, so explicitly passing the context is usually the safest and most robust choice.
@Jgilhuly You should actually pin the version for this purpose in a centralized location [here](https://github.com/Arize-ai/phoenix/blob/af9c3e9bf44484f2dbfe180363f7e2dbbc1495a7/pyproject.toml#L110). You would need to pin all its sibling packages there as well.
Thank you for the report. We'll investigate.
An [example](https://github.com/Arize-ai/openinference/blob/structured-outputs/python/openinference-semantic-conventions/examples/structured_outputs/vertexai.tools.json#L15) of what the [tools](https://github.com/Arize-ai/openinference/blob/2420464aa0b601e998e0b2601abd3cd4ffdcf3cf/python/openinference-semantic-conventions/examples/structured_outputs/vertexai.tools.py#L72-L73) can look like.
> A bit confused on the motivation here - as end-to-end testing should be as "realistic" a setup as we can have. We don't actually use in-memory sqlite so this...