[QUESTION] Making output parser and tools work together
I'm trying to make the Structured Output Parser work alongside the Calculator tool, but there is no easy way to implement this. When using a standard workflow with an LLM Chain it is easy to just attach a model and a prompt and structure the output using the parser, but there is no way to attach extra logic such as the Calculator to verify the result.
Vice versa, the Calculator can only work as a tool attached to an Agent, but then there is no way to structure the agent's output as, say, a JSON-formatted response (roughly the flow sketched below).
I'm sorry if this is a stupid suggestion / question, but there is no clearly documented way of doing this.
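To make it concrete, here is roughly the flow I'm after written as plain LangChainJS code. This is only a loose sketch: the import paths depend on your langchain version, and the model, schema and agent setup here are my own assumptions, not existing nodes.

```ts
import { ChatOpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator"; // path varies by langchain version
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { StructuredOutputParser } from "langchain/output_parsers";
import { z } from "zod";

// The JSON shape I want back from the flow (schema is just an example).
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("the final answer"),
    calculation: z.string().describe("the arithmetic that was verified"),
  })
);

// Agent with the Calculator tool attached.
const executor = await initializeAgentExecutorWithOptions(
  [new Calculator()],
  new ChatOpenAI({ temperature: 0 }),
  { agentType: "zero-shot-react-description" }
);

// Ask the agent to follow the parser's format instructions, then parse its answer.
const result = await executor.invoke({
  input: `What is 17% of 482? ${parser.getFormatInstructions()}`,
});
const structured = await parser.parse(result.output);
console.log(structured); // { answer: "...", calculation: "..." }
```

That two-step approach (run the agent, then parse its text) is basically what I'd like to express as connected nodes.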
I'm also struggling with this. Does anyone have any advice?
Does anyone know how to solve this? @janpopovic @staticdreams
I want to be able to feed the output of any Runnable / Chain / Agent into an output parser
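In LangChainJS terms I picture something like the sketch below. It assumes an agent with the Calculator tool, and it relies on the fact that output parsers are themselves Runnables, so they can sit at the end of a RunnableSequence; import paths and the agent setup are just assumptions on my side.

```ts
import { ChatOpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { StructuredOutputParser } from "langchain/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "the final answer as plain text",
});

const executor = await initializeAgentExecutorWithOptions(
  [new Calculator()],
  new ChatOpenAI({ temperature: 0 }),
  { agentType: "zero-shot-react-description" }
);

// The agent executor is a Runnable, so it can be composed with the parser:
const chain = RunnableSequence.from([
  executor,
  (result: { output: string }) => result.output, // pull out the raw text answer
  parser,                                        // parsers are Runnables too
]);

const structured = await chain.invoke({
  input: `What is 12.5% of 240? ${parser.getFormatInstructions()}`,
});
```

Being able to wire that composition up as nodes is what I'm missing.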
@staticdreams , @janpopovic , @ushyDev , @dentro-innovation
I'm also trying to find a way to make the Structured Output Parser and the tools work together. Have any of you made any progress on this?
This looks like a real problem. I think it is very common in applications, since tools are used all the time, and structured output is much easier to consume in the next processing step. ... Does anyone have a solution?
Have you guys tried using a tool node from Sequential Agents together with a custom function? You could prototype your idea by importing the required LangChain components into your custom function node and using them programmatically on the tool node's output. It's just an idea, I haven't tried it.
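Something roughly like this inside the custom function. It's only a sketch: I'm assuming the upstream tool/agent node hands you its text output as a string (called `agentOutput` here, check how your node actually exposes variables), and that the agent's prompt already included the parser's format instructions so the text contains the JSON block the parser expects.

```ts
import { StructuredOutputParser } from "langchain/output_parsers";
import { z } from "zod";

// Rough shape of what the custom function would do: take the raw text coming
// out of the tool/agent node and turn it into structured JSON for the next node.
export async function parseAgentOutput(agentOutput: string) {
  const parser = StructuredOutputParser.fromZodSchema(
    z.object({
      answer: z.string().describe("the final answer"),
      verified: z.boolean().describe("whether the calculation was checked"),
    })
  );
  // parse() throws if the text doesn't contain the JSON block described by
  // getFormatInstructions(), so make sure those instructions were in the prompt.
  return parser.parse(agentOutput);
}
```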