DeepResearch
Manage the vLLM lifecycle
This is a first step toward improving the usability and robustness of this repository. This change ensures we shut down all vLLM instances when the process exits, whether because:
- the task finished executing,
- an error occurred, or
- the run was interrupted from the keyboard.
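A common way to guarantee cleanup across all three exit paths is a `try`/`finally` (or context manager) around the job, since `finally` runs on normal completion, exceptions, and `KeyboardInterrupt` alike. The sketch below illustrates the idea only; `VLLMEngine`, `managed_engines`, and `shutdown` are hypothetical names, not this repository's actual API.

```python
import contextlib


class VLLMEngine:
    """Hypothetical stand-in for a vLLM engine instance."""

    def __init__(self, name: str):
        self.name = name
        self.alive = True

    def shutdown(self) -> None:
        self.alive = False


@contextlib.contextmanager
def managed_engines(names):
    """Start engines and guarantee they are shut down on exit.

    The finally block runs whether the body completes normally,
    raises an error, or is interrupted with Ctrl-C.
    """
    engines = [VLLMEngine(n) for n in names]
    try:
        yield engines
    finally:
        for engine in engines:
            engine.shutdown()


if __name__ == "__main__":
    # Even though the task raises, both engines are shut down.
    try:
        with managed_engines(["worker-0", "worker-1"]) as engines:
            raise RuntimeError("task failed")
    except RuntimeError:
        pass
```

Decoupling engine lifetime from job processing, as suggested below, would replace this per-task wrapper with a longer-lived engine pool.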
IMO, this isn't the optimal solution; I'd prefer to decouple the LLM engine's lifetime from the actual job processing, but that's for a future PR.
closes #109