nunomlucio
Same here; the error message in LM Studio is: "Client disconnected. Stopping generation... (if the model is busy processing the prompt, it will finish first)". As a result, privategpt...
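That disconnect is typically the client giving up before LM Studio finishes processing the prompt, so raising the client-side timeout is the usual workaround. Here is a minimal sketch against LM Studio's OpenAI-compatible local server, assuming its default port 1234; the model name and the 300-second value are illustrative examples, not privateGPT's actual wiring:

```python
# Sketch: raise the client-side timeout when talking to LM Studio's
# OpenAI-compatible server, so the client doesn't disconnect while the
# model is still busy processing the prompt.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio local server default (assumed)
    api_key="lm-studio",                  # LM Studio ignores the key's value
    timeout=300.0,                        # seconds; prevents the premature
                                          # "Client disconnected" abort
)

resp = client.chat.completions.create(
    model="local-model",  # LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```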
> I had to increase the timeout to 300 in the `llm_component.py` file. I was using Ollama; that resolved the problem for me:
>
> ```python
> ollama_settings = settings.ollama
> self.llm = Ollama(
>     model=ollama_settings.llm_model,
>     base_url=ollama_settings.api_base,
>     request_timeout=300,
>     # ...
> )
> ```
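For anyone not patching privateGPT's exact file, the same idea in self-contained form: a minimal sketch using the LlamaIndex `Ollama` wrapper that privateGPT builds on (the import path is for recent llama-index releases; the model name and URL are placeholders):

```python
# Standalone sketch: raise the LlamaIndex Ollama client's request timeout so
# long prompt-processing runs aren't aborted mid-generation.
from llama_index.llms.ollama import Ollama

llm = Ollama(
    model="mistral",                    # whatever model your Ollama server hosts
    base_url="http://localhost:11434",  # Ollama's default endpoint
    request_timeout=300.0,              # seconds; the library default is far shorter
)

print(llm.complete("Hello"))  # should now complete instead of timing out
```

The trade-off is simply that genuinely hung requests take longer to fail; 300 seconds is a generous ceiling for slow hardware or large prompts, not a magic number.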