No feedback for the user while correcting code
Problem
When code corrections are triggered, the user is left waiting without any feedback on the CLI about the current status of the process (image below).
Solution
Streaming output from the LLMs in between corrections would prevent the user from thinking the process has halted or crashed (green region in the image below).
The silence is particularly troublesome when doing inference on slow setups, such as local LLMs on laptops (e.g. Llama 3 8B).
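For illustration only, a minimal sketch of what streaming intermediate output between correction attempts could look like. The `llm_stream` generator and `correct_code` loop below are hypothetical stand-ins, not BambooAI's actual API:

```python
import sys

def llm_stream(messages):
    """Hypothetical generator yielding response tokens as they arrive."""
    for token in ["Revising ", "the ", "plotting ", "code", "..."]:
        yield token

def correct_code(messages, max_attempts=3):
    """Hypothetical correction loop that streams each attempt to the terminal."""
    for attempt in range(1, max_attempts + 1):
        print(f"\n--- Correction attempt {attempt} ---")
        response = []
        for token in llm_stream(messages):
            sys.stdout.write(token)  # show progress token by token
            sys.stdout.flush()
            response.append(token)
        break  # in the real loop: re-run the code, retry on failure
    return "".join(response)

if __name__ == "__main__":
    correct_code([{"role": "user", "content": "fix my code"}])
```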
Good point. During that gap it is developing a new version of the code, incorporating the fix. We can easily enable a stream to the terminal by changing line 510 in the bambooai.py module to `llm_response = self.llm_stream(self.log_and_call_manager, code_messages, agent=agent, chain_id=self.chain_id)`, but it will make the terminal window really busy/cluttered. I will try to think of something.
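One possible middle ground, sketched below: consume the stream silently but keep a compact single-line status indicator alive, so the user sees activity without the full token stream cluttering the terminal. Everything here (the spinner helper, the fake stream) is a hypothetical illustration, not code from the repository:

```python
import itertools
import sys
import time

def spinner_status(label, stream):
    """Consume a token stream quietly, showing only a one-line spinner."""
    frames = itertools.cycle("|/-\\")
    chunks = []
    for token in stream:
        chunks.append(token)
        sys.stdout.write(f"\r{label} {next(frames)}")  # overwrite same line
        sys.stdout.flush()
    sys.stdout.write(f"\r{label} done.\n")
    return "".join(chunks)

def fake_llm_stream():
    """Stand-in for a streaming LLM response during a correction pass."""
    for token in ["def ", "plot", "(data):", " ..."]:
        time.sleep(0.2)  # simulate slow local inference
        yield token

if __name__ == "__main__":
    code = spinner_status("Correcting code", fake_llm_stream())
```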