
Questions about LLMCompiler


Hi, team.

I am developing a brand new agent framework, instinct.cpp. In its latest version, I implemented parallel function calling based on the idea from your paper, and it actually works well.
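For illustration, here is a minimal sketch of the kind of scheduler I mean (this is not the actual instinct.cpp or LLMCompiler code; the `Task` structure and tool functions are made up): tasks whose dependencies are already resolved run concurrently, which is the core idea behind parallel function calling.

```python
import asyncio
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Task:
    idx: int                                      # task id, referenced by others as "$idx"
    fn: Callable[..., Any]                        # the tool to invoke (search, math, ...)
    args: tuple
    deps: set[int] = field(default_factory=set)   # ids of tasks whose results we need first

async def run_dag(tasks: list[Task]) -> dict[int, Any]:
    """Run every task whose dependencies are satisfied; independent tasks run in parallel."""
    results: dict[int, Any] = {}
    pending = {t.idx: t for t in tasks}
    while pending:
        ready = [t for t in pending.values() if t.deps.issubset(results)]
        if not ready:
            raise RuntimeError("cycle or unresolved dependency in the task graph")
        # Tool calls are blocking, so push them to threads and await them together.
        outputs = await asyncio.gather(*(asyncio.to_thread(t.fn, *t.args) for t in ready))
        for task, out in zip(ready, outputs):
            results[task.idx] = out
            del pending[task.idx]
    return results
```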

After reviewing some failure cases (I have not run further experiments due to my limited time), a few concerns came up:

  1. LLMCompiler requires the LLM to have strong reasoning and instruction-following capabilities, at least on par with gpt-3.5-turbo; otherwise it may be almost unusable. Among open-source models, such capabilities more often seem to require 70B-class models.
  2. Replanning seems unreliable. The original paper does not discuss the effectiveness of replanning in detail. In my experiments, if the model fails to produce a good plan in the first round, it is unlikely to produce a better one in the second.
  3. During dependency resolution, the joiner plays an important role in condensing earlier answers into single-word entities. This simplifies argument substitution for downstream function calls that depend on those results, but it has many limitations. For example:
What's the temperature in New York yesterday, raised to the power of two?

With a web search tool and a math calculator available, the planner would produce a task graph similar to this one:

1. search("temperature in New York yesterday")
2. math("$1 ^ 2")
3. join()

While the first call will succeed with a result like 21°C, the second call would fail in the math expression evaluator, since raising the string 21°C to the power of two is undefined behavior.
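To make this concrete, here is a minimal Python sketch of the failure (the tool behavior and the `normalize_numeric` helper are hypothetical, not part of LLMCompiler): naive `$1` substitution pastes the unit-annotated string into the math expression, which the evaluator cannot parse, while normalizing intermediate results to bare numbers would avoid it.

```python
import re

# Result of task 1 (web search), mirroring the example above.
task_results = {1: "21°C"}

def substitute_args(expr: str, results: dict[int, str]) -> str:
    # Naive "$N" substitution, as produced by the plan: math("$1 ^ 2").
    return re.sub(r"\$(\d+)", lambda m: results[int(m.group(1))], expr)

def math_tool(expr: str):
    # Stand-in for a math expression evaluator that only accepts plain arithmetic.
    return eval(expr.replace("^", "**"))  # illustration only; never eval untrusted input

naive = substitute_args("$1 ^ 2", task_results)   # -> "21°C ^ 2"
try:
    math_tool(naive)
except SyntaxError:
    print("math tool cannot parse the unit-annotated string")

# One possible mitigation (not something LLMCompiler does out of the box):
# normalize each intermediate result to a bare number before substitution.
def normalize_numeric(text: str) -> str:
    match = re.search(r"-?\d+(?:\.\d+)?", text)
    return match.group(0) if match else text

clean = {k: normalize_numeric(v) for k, v in task_results.items()}
print(math_tool(substitute_args("$1 ^ 2", clean)))  # 441
```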

So here are my questions:

  1. Which open-source models do you recommend for tool-calling agents with LLMCompiler?
  2. What future improvements are planned for replanning and the joiner?

RobinQu · May 20 '24 06:05