Keyword argument is unexpected, but it should be.
**Describe the bug**
After the user's login message, the model replies with JSON like this:
```json
{
  "function": "send_message",
  "params": {
    "inner_thoughts": "Responding to the user's login message by introducing myself.",
    "message": "Hello, Chad! I'm MemGPT, your kind, thoughtful, and inquisitive companion. I'm here to chat and help you with your questions, thoughts, and ideas. Let's have a great conversation together!"
  }
}
```
MemGPT replies back:
```json
{
  "status": "Failed",
  "message": "Error calling function send_message: send_message() got an unexpected keyword argument 'inner_thoughts'",
  "time": "2024-02-16 09:21:31 PM JST+0900"
}
```
However, the function description says:
```yaml
send_message:
  description: Sends a message to the human user.
  params:
    inner_thoughts: Deep inner monologue private to you only.
    message: Message contents. All unicode (including emojis) are supported.
```
The function call therefore matches the declared schema and should be accepted.
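In other words, the schema advertises `inner_thoughts` as a parameter, but the Python-side function signature evidently does not accept it. A minimal sketch of that mismatch, assuming an illustrative `send_message` signature (not MemGPT's actual implementation):

```python
# Minimal reproduction of the failure mode, assuming a Python-side
# signature that only declares `message` (illustrative, not MemGPT's
# actual implementation):
def send_message(message: str) -> None:
    print(message)

# Forwarding the model's params verbatim triggers the error:
params = {
    "inner_thoughts": "Responding to the user's login message by introducing myself.",
    "message": "Hello, Chad!",
}
try:
    send_message(**params)
except TypeError as e:
    print(e)  # send_message() got an unexpected keyword argument 'inner_thoughts'
```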
**Please describe your setup**
- [ ] How did you install memgpt?
  - `git clone`
- [ ] Describe your setup
  - Linux
  - Terminal
**Screenshots**
N/A
**Additional context**
memgpt-prompt.txt
**MemGPT Config**
Please attach your `~/.memgpt/config` file or copy paste it below.
config-redacted.txt
If you're not using OpenAI, please provide additional information on your local LLM setup:
**Local LLM details**
If you are trying to run MemGPT with local LLMs, please provide the following information:
- [ ] The exact model you're trying to use (e.g. `dolphin-2.1-mistral-7b.Q6_K.gguf`): miqudev/miqu-1-70b
- [ ] The local LLM backend you are using (web UI? LM Studio?): web UI
- [ ] Your hardware for the local LLM backend (local computer? operating system? remote RunPod?): local Linux
Was this ever resolved? Running into the same issue.
@davidkundrats what model wrapper + model are you using? I can try and replicate the issue + create a patch.
Essentially, some model wrappers do prompt formatting / parsing in a way that injects `inner_thoughts` into the function call (as opposed to it being outside the function call, e.g. as part of `content` in an OpenAI-style message).

The model wrappers that put `inner_thoughts` into the function call also have to strip it back out of the function call before the function is handed off to the function executor.

So basically this error is happening because `inner_thoughts` did not get "popped" off the kwargs after being injected into the functions. To show you the code, for the `llama3` wrapper, this happens here: https://github.com/cpacker/MemGPT/blob/832e07d5bfd7687ccd6632ee3c911f042c657570/memgpt/local_llm/llm_chat_completion_wrappers/llama3.py#L283-L285
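For anyone patching a wrapper that does not strip the field yet, the pattern is to pop `inner_thoughts` off the parsed arguments before they are handed to the executor. A minimal sketch, with illustrative names (not MemGPT's exact internals):

```python
from __future__ import annotations

def extract_function_call(parsed: dict) -> tuple[str, dict, str | None]:
    """Split the model's JSON into (function name, kwargs, inner monologue)."""
    name = parsed["function"]
    kwargs = dict(parsed["params"])
    # `inner_thoughts` was injected into the call for prompting purposes only;
    # without this pop, the executor raises:
    #   send_message() got an unexpected keyword argument 'inner_thoughts'
    inner_thoughts = kwargs.pop("inner_thoughts", None)
    return name, kwargs, inner_thoughts
```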
@cpacker I made a band-aid fix that covers the inner_monologue issue by doing basically what you said. Thanks for that reply; I had a hunch it was the model wrapper. For future reference, it was Azure's GPT-4 model.
While I have you, is there a way to see the uncompiled dev portal code somewhere? I'd like to make a few changes custom to my use case.