gpt-pilot
[Bug]: Reprompting broken for local llm
Version
VisualStudio Code extension
Operating System
Windows 11
What happened?
When using Mac, Linux, or Windows 11 with WSL2 Ubuntu, I get the following bug.
Whenever a longer output is expected from an agent, GPT-Pilot forces the model to go way beyond its token limit. With Hermes-Mistral-7B-Pro, for example, it outputs:
```
},
"human_intervention_description": "Create a directory in root.",
},
},
]
}
}
"
"Tower
##
##
##
## 1/
## "Add "Rem
##
"Data
### "Redissh
## "L1
"Depar---designer
"and
"Designb1 0 proposed
##
Dynamic showed
"
"Current
"F###
"D
"##
####
##
## <dummy00014>w4 1
d ion
##
##
A------- - -F
##
Accid0 Custom
"
*/
## b B This
##
A
```
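One mitigation, assuming the local server exposes an OpenAI-compatible completions route (Oobabooga, LM Studio, and Ollama all do), is to send an explicit `max_tokens` cap in the request body so the server stops generating at the configured limit instead of running past the context window. A minimal sketch; the model name is a placeholder and the 8192 default is an assumption matching the `.env` value mentioned below:

```python
def build_completion_payload(messages, max_tokens=8192):
    """Build an OpenAI-style chat completion request body with an explicit
    token cap.

    The 8192 default mirrors the .env limit discussed in this thread; the
    model name is a placeholder for whatever the local server serves.
    Without an explicit max_tokens, some servers keep generating until the
    model degenerates into noise like the sample above.
    """
    return {
        "model": "local-model",   # placeholder, not gpt-pilot's actual config key
        "messages": messages,
        "max_tokens": max_tokens, # hard cap on generated tokens
        "stream": True,
    }
```

This only bounds the completion side; the prompt itself still has to fit within the model's context window.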
I have no idea how to fix this yet. Oobabooga, for example, is showing 17k tokens even though .env says 8192. Same with LM Studio and Ollama.
I have experimented a bit with context length.
Cranking up the alpha value seems to have helped a ton.
Finally found the error in the log: 2024-03-30 19:54:10,788 [llm_connection.py:516 - stream_gpt_completion() ] ERROR: Unable to decode line: : ping - 2024-03-30 18:54:10.748186 Expecting value
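The "Expecting value" error suggests the stream handler is passing the server's SSE keep-alive comment (a line starting with `:`, like `: ping - <timestamp>`) straight to the JSON decoder. A hedged sketch of how such lines could be filtered before decoding; the function name and exact line formats are assumptions for illustration, not gpt-pilot's actual code:

```python
import json

def parse_sse_line(raw_line):
    """Return the decoded JSON payload of an SSE data line, or None for
    lines that carry no payload.

    Per the SSE format, lines beginning with ':' are comments -- local
    servers use them as keep-alive pings (': ping - <timestamp>').
    Feeding such a line to json.loads raises "Expecting value", which is
    the error in the log above, so they are skipped here.
    """
    line = raw_line.strip()
    if not line or line.startswith(":"):
        return None  # keep-alive ping or blank separator line
    if line.startswith("data:"):
        line = line[len("data:"):].strip()
    if line == "[DONE]":
        return None  # OpenAI-style end-of-stream sentinel
    return json.loads(line)
```

With a guard like this, the ping lines are silently dropped and only real `data:` chunks reach the decoder.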
Issue solved, therefore closing.
Actually, it's not solved yet: I fixed it locally and will push code changes later this week.
Sorry for the confusion, I was trying to bring order to the chaos. Waiting for your pull request :)