llm-workflow-engine
                        Freezes at end of responses
Sometimes the chatgpt program seemingly never gets the memo that ChatGPT has completed its response. If streaming is enabled, the entire response is written to the screen, but no prompt for the next input ever appears. If streaming is disabled, the program never learns the response is complete, so it never prints it at all. In either case, if logging is enabled, the response never gets logged. The only way out is to hit Ctrl+C and terminate the program.

There should probably be a timeout similar to the existing "no response at all" timeout, triggered after a certain period with no data from ChatGPT once a valid response has started arriving. When that timeout fires, the program should assume it has received everything it's going to receive and handle the situation as if the response were complete, e.g. log it, render it if streaming is disabled, show the next input prompt, and so on.
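To make the idea concrete, here's a rough sketch of what such an inter-chunk timeout could look like. None of this is the project's actual code: `collect_response`, `StreamStalled`, `STREAM_CHUNK_TIMEOUT`, and the `raise_on_stall` flag are made-up names, and the real streaming plumbing in the wrapper almost certainly looks different.

```python
import queue
import threading

# Hypothetical knob for illustration only; not an existing config option.
STREAM_CHUNK_TIMEOUT = 30  # seconds with no new data before giving up


class StreamStalled(Exception):
    """Raised when no new data arrives within the inter-chunk timeout."""


def collect_response(stream, chunk_timeout=STREAM_CHUNK_TIMEOUT,
                     raise_on_stall=False):
    """Drain a streaming response, treating a stall as end-of-response.

    `stream` is any iterable yielding text chunks. Chunks are pulled on a
    worker thread so the consumer can enforce a per-chunk timeout even if
    the underlying read blocks indefinitely.
    """
    chunks = queue.Queue()
    sentinel = object()

    def reader():
        try:
            for chunk in stream:
                chunks.put(chunk)
        finally:
            chunks.put(sentinel)

    threading.Thread(target=reader, daemon=True).start()

    parts = []
    while True:
        try:
            chunk = chunks.get(timeout=chunk_timeout)
        except queue.Empty:
            # The stream went quiet after a valid response started arriving.
            if raise_on_stall:
                raise StreamStalled(
                    f"no data for {chunk_timeout}s after {len(parts)} chunks"
                )
            # Otherwise assume the response is complete and use what we have,
            # so it can still be rendered, logged, and followed by a prompt.
            break
        if chunk is sentinel:
            break
        parts.append(chunk)
    return "".join(parts)
```

Reading on a worker thread means the timeout still applies even when the underlying read blocks forever, which appears to be what's happening here.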
This is a sporadic problem that we can't really do much about on our end. I added a timeout in my branch that raises a new exception when this happens. In my experience, it's best to start over with a new session when this starts happening.
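I don't know what the branch actually does, but caller-side handling along these lines (reusing the hypothetical `collect_response`/`StreamStalled` names from the sketch above) would let the input loop recover instead of hanging:

```python
def run_turn(stream, log):
    """Sketch of one REPL turn: a stalled stream surfaces as an error
    instead of a hang. Names here are illustrative, not the branch's API.
    """
    try:
        response = collect_response(stream, raise_on_stall=True)
    except StreamStalled as err:
        print(f"Response stalled ({err}); consider starting a new session.")
        return None
    log.write(response + "\n")
    return response
```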
Closing due to inactivity, and I haven't seen this issue recently.