
OpenAPI planner agent doesn't support large specs


When replicating the hierarchical planning example with a large enough OpenAPI specification, the following error is thrown when running the agent with any query:

InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 6561 tokens. Please reduce the length of the messages.

Here is how I'm reducing my OpenAPI spec:

import yaml
from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec

with open("server.yml") as f:
    raw_server_spec = yaml.load(f, Loader=yaml.Loader)
server_spec = reduce_openapi_spec(raw_server_spec)
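
One way to confirm that the reduced spec itself is what overflows the context window is to count its tokens with tiktoken before handing it to the agent. A rough sketch, assuming (as in reduce_openapi_spec's output) that the reduced spec exposes an endpoints list, and using str() as a crude proxy for what actually lands in the prompt:

import tiktoken

# Approximate the prompt contribution of the reduced spec by serializing
# its endpoint list and counting tokens with the gpt-3.5-turbo encoding.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
spec_tokens = len(enc.encode(str(server_spec.endpoints)))
print(f"Reduced spec is ~{spec_tokens} tokens (gpt-3.5-turbo limit: 4097)")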

And here's how I'm initializing the agent:

from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_toolkits.openapi import planner

llm = ChatOpenAI(temperature=0.0)
openapi_agent = planner.create_openapi_agent(server_spec, requests_wrapper, llm)
user_query = "Return the response for retrieving document info for the document with id 1"
openapi_agent.run(user_query)

I think the OpenAPI spec reducer should have a way of splitting the spec into multiple chunks when necessary, and the OpenAPI agent should be adapted to work across those chunks, perhaps with a map-reduce or "stuff" approach (see the sketch below).
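
A minimal sketch of what that chunking could look like, assuming (as in reduce_openapi_spec's output) the reduced spec carries an endpoints list of (name, description, docs) entries; the agent would then be run once per chunk and the per-chunk answers combined in a map-reduce step:

import tiktoken

def chunk_endpoints(endpoints, max_tokens=3000, model="gpt-3.5-turbo"):
    # Greedily pack endpoint entries into groups whose serialized form
    # stays under the token budget, leaving headroom for the rest of
    # the planner prompt.
    enc = tiktoken.encoding_for_model(model)
    chunks, current, used = [], [], 0
    for endpoint in endpoints:
        n = len(enc.encode(str(endpoint)))
        if current and used + n > max_tokens:
            chunks.append(current)
            current, used = [], 0
        current.append(endpoint)
        used += n
    if current:
        chunks.append(current)
    return chunks

# One planner agent per chunk; an outer step would merge their answers.
spec_chunks = chunk_endpoints(server_spec.endpoints)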

voluntadpear avatar Apr 12 '23 18:04 voluntadpear

Also interested in a solution.

borisko123 avatar Jun 13 '23 14:06 borisko123

Small update on this: OpenAI has released a gpt-3.5-turbo variant with a 16k-token context window.

llm = ChatOpenAI(model="gpt-3.5-turbo-16k", openai_api_key=openai_api_key, temperature=0.0)
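
With the 16k-token window, the same planner setup from the original snippet gets roughly four times the headroom before hitting the same InvalidRequestError, though a sufficiently large spec will still overflow it, so chunking remains the more general fix.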

s04 avatar Jun 19 '23 02:06 s04

Hi, @voluntadpear! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue is that the OpenAPI planner agent doesn't support large OpenAPI specifications, which results in a context-length error when running the agent with any query. The suggested solution is to split the spec into multiple chunks and adapt the agent to handle them. It was also mentioned that OpenAI's larger-context gpt-3.5-turbo-16k model may help work around the issue.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project! Let us know if you have any further questions or concerns.

dosubot[bot] avatar Sep 21 '23 16:09 dosubot[bot]