Title
Solving the return value format issue during multiple function calls with the LLaMA 3 model.
Type
🐛 Bug Fix
Changes
[REQUIRED] Testing - Attach a screenshot of any new tests passing locally
If UI changes, send a screenshot/GIF of working UI fixes
Case:
from typing import Annotated, Literal

import autogen
import config

Operator = Literal["+", "-", "*", "/"]


def calculator(a: int, b: int, operator: Annotated[Operator, "operator"]) -> int:
    if operator == "+":
        return a + b
    elif operator == "-":
        return a - b
    elif operator == "*":
        return a * b
    elif operator == "/":
        return int(a / b)
    else:
        raise ValueError("Invalid operator")


assistant = autogen.AssistantAgent(
    name="Assistant",
    system_message="You are a helpful AI assistant. "
    "You can help with simple calculations. "
    "You should execute function over and over again instead of calling it all together."
    "Return 'TERMINATE' and result when the task is done.",
    llm_config={"config_list": config.config_list, "cache_seed": None},
)

# The user proxy agent is used for interacting with the assistant agent
# and executes tool calls.
user_proxy = autogen.UserProxyAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
    # max_consecutive_auto_reply=1,
    code_execution_config={"use_docker": False},
)

# Register the tool signature with the assistant agent.
assistant.register_for_llm(name="calculator", description="simple calculator tool")(calculator)

# Register the tool function with the user proxy agent.
user_proxy.register_for_execution(name="calculator")(calculator)

chat_result = user_proxy.initiate_chat(assistant, message="What is (1423 - 123) / 3 +2 ?")
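
Running this script drives the multi-turn exchange shown below. One way to inspect what the agents actually exchanged (a sketch, assuming pyautogen's ChatResult.chat_history attribute) is:

# Sketch: dump the conversation turns from the ChatResult returned above.
for message in chat_result.chat_history:
    print(message.get("role"), "->", str(message.get("content"))[:100])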
prompt
### System:
You are a helpful AI assistant. You can help with simple calculations. You should execute function over and over again instead of calling it all together.Return 'TERMINATE' and result when the task is done. Produce JSON OUTPUT ONLY! Adhere to this format {"name": "function_name", "arguments":{"argument_name": "argument_value"}} The following functions are available to you:
{'type': 'function', 'function': {'description': 'simple calculator tool', 'name': 'calculator', 'parameters': {'type': 'object', 'properties': {'a': {'type': 'integer', 'description': 'a'}, 'b': {'type': 'integer', 'description': 'b'}, 'operator': {'enum': ['+', '-', '*', '/'], 'type': 'string', 'description': 'operator'}}, 'required': ['a', 'b', 'operator']}}}
### User:
What is (1423 - 123) / 3 +2 ?
### Assistant:
Tool Calls: [
  {
    "id": "call_be95d6c1-38ee-444b-9aa4-40cd12b68659",
    "type": "function",
    "function": {
      "name": "calculator",
      "arguments": {
        "a": 1423,
        "b": 123,
        "operator": "-"
      }
    }
  }
]
### User:
1300
response_json
{'model': 'llama3:70b', 'created_at': '2024-07-10T09:21:13.92652752Z', 'response': '{\n "id": "call_8f2e33c4-f7d5-4aa6-b43d-93a14febf3ec",\n "type": "function",\n "function": {\n "name": "calculator",\n "arguments": {\n "a": 1300,\n "b": 3,\n "operator": "/"\n }\n }\n}\n\n', 'done': True, 'done_reason': 'stop', 'context': [128006, 882, 128007, 271, 14711, 744, 512, 2675, 527, 264, 11190, 15592, 18328, 13, 1472, 649, 1520, 449, 4382, 29217, 13, 1472, 1288, 9203, 734, 927, 323, 927, 1578, 4619, 315, 8260, 433, 682, 3871, 47450, 364, 4292, 16818, 2390, 6, 323, 1121, 994, 279, 3465, 374, 2884, 13, 87988, 4823, 32090, 27785, 0, 2467, 6881, 311, 420, 3645, 5324, 609, 794, 330, 1723, 1292, 498, 330, 16774, 23118, 14819, 1292, 794, 330, 14819, 3220, 32075, 578, 2768, 5865, 527, 2561, 311, 499, 512, 13922, 1337, 1232, 364, 1723, 518, 364, 1723, 1232, 5473, 4789, 1232, 364, 23796, 31052, 5507, 518, 364, 609, 1232, 364, 89921, 518, 364, 14105, 1232, 5473, 1337, 1232, 364, 1735, 518, 364, 13495, 1232, 5473, 64, 1232, 5473, 1337, 1232, 364, 11924, 518, 364, 4789, 1232, 364, 64, 25762, 364, 65, 1232, 5473, 1337, 1232, 364, 11924, 518, 364, 4789, 1232, 364, 65, 25762, 364, 8043, 1232, 5473, 9195, 1232, 2570, 61106, 51449, 81274, 3434, 4181, 364, 1337, 1232, 364, 928, 518, 364, 4789, 1232, 364, 8043, 8439, 2186, 364, 6413, 1232, 2570, 64, 518, 364, 65, 518, 364, 8043, 663, 3500, 3818, 14711, 2724, 512, 3923, 374, 320, 10239, 18, 482, 220, 4513, 8, 611, 220, 18, 489, 17, 24688, 14711, 22103, 512, 7896, 41227, 25, 2330, 220, 341, 262, 330, 307, 794, 330, 6797, 21960, 2721, 67, 21, 66, 16, 12, 1987, 2176, 12, 14870, 65, 12, 24, 5418, 19, 12, 1272, 4484, 717, 65, 22347, 2946, 761, 262, 330, 1337, 794, 330, 1723, 761, 262, 330, 1723, 794, 341, 415, 330, 609, 794, 330, 89921, 761, 415, 330, 16774, 794, 341, 286, 330, 64, 794, 220, 10239, 18, 345, 286, 330, 65, 794, 220, 4513, 345, 286, 330, 8043, 794, 68873, 415, 457, 262, 457, 220, 457, 2595, 14711, 2724, 512, 5894, 15, 271, 128009, 128006, 78191, 128007, 271, 517, 220, 330, 307, 794, 330, 6797, 62, 23, 69, 17, 68, 1644, 66, 19, 2269, 22, 67, 20, 12, 19, 5418, 21, 1481, 3391, 67, 12, 6365, 64, 975, 1897, 13536, 18, 762, 761, 220, 330, 1337, 794, 330, 1723, 761, 220, 330, 1723, 794, 341, 262, 330, 609, 794, 330, 89921, 761, 262, 330, 16774, 794, 341, 415, 330, 64, 794, 220, 5894, 15, 345, 415, 330, 65, 794, 220, 18, 345, 415, 330, 8043, 794, 81555, 262, 457, 220, 457, 633, 128009], 'total_duration': 62771065501, 'load_duration': 3401374, 'prompt_eval_count': 308, 'prompt_eval_duration': 8843197000, 'eval_count': 85, 'eval_duration': 53921702000}
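
The 'response' field in the payload above is the raw model output. Parsing it shows the mismatch: the system prompt asked for a top-level {"name": ..., "arguments": {...}} object, but the model instead mirrored the tool-call envelope it saw in the conversation history. A minimal check, reusing the logged string (whitespace abbreviated):

import json

# The 'response' string from the Ollama payload logged above.
raw = (
    '{\n  "id": "call_8f2e33c4-f7d5-4aa6-b43d-93a14febf3ec",\n'
    '  "type": "function",\n  "function": {\n    "name": "calculator",\n'
    '    "arguments": {\n      "a": 1300,\n      "b": 3,\n'
    '      "operator": "/"\n    }\n  }\n}\n\n'
)

parsed = json.loads(raw)
print("name" in parsed)      # False: top-level keys are id/type/function
print("function" in parsed)  # True: the call is nested one level deeper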
hey @silence-coding can you explain the issue a bit more - not clear to me from the shared example
The issue arises because llama3 is influenced by the interaction history during inference, as illustrated by the example above. The originally expected response format is {"response": {"name": "", "arguments": ""}}, but the model may actually return {"response": {"function": {"name": "", "arguments": ""}}}, mirroring the tool-call structure it sees in the conversation.
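
If that is the root cause, one defensive fix on the parsing side is to accept both shapes and unwrap the OpenAI-style envelope when present. A minimal sketch (a hypothetical helper for illustration, not litellm's actual code):

def normalize_tool_call(parsed: dict) -> dict:
    # Unwrap an OpenAI-style {"id", "type", "function": {...}} envelope
    # if the model produced one instead of the bare format the prompt asked for.
    if isinstance(parsed.get("function"), dict):
        parsed = parsed["function"]
    return {"name": parsed.get("name"), "arguments": parsed.get("arguments")}

# Both shapes normalize to the same call:
# normalize_tool_call({"name": "calculator", "arguments": {"a": 1, "b": 2, "operator": "+"}})
# normalize_tool_call({"id": "...", "type": "function",
#                      "function": {"name": "calculator", "arguments": {...}}})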
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
0 out of 2 committers have signed the CLA.
❌ p00512853
❌ silence-coding
p00512853 does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.