unload the model
Hi, sorry, I can't find how to unload a model. I load a model, delete the object, and call the garbage collector, but it does nothing. How are we supposed to unload a model? I want to load a model, run a batch, load another, run a batch, and so on for several models in order to compare them. But for now I have to stop Python each time.
Try calling torch.cuda.empty_cache() after you delete the LLM object
You can also use gc.collect() to remove *garbage* objects immediately after you delete them.
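Concretely, the suggestion so far is this minimal sketch, assuming llm is the vllm.LLM instance that has already finished its batch:

import gc
import torch

del llm                    # drop the last Python reference to the engine
gc.collect()               # collect the now-unreachable objects immediately
torch.cuda.empty_cache()   # return cached CUDA blocks to the driver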
Neither works.
You should also clean Notebook output: https://stackoverflow.com/questions/24816237/ipython-notebook-clear-cell-output-in-code
I always do (in the GUI, not in my cells).
This seems mostly solved by #1908 with:
import gc
import torch
from vllm import LLM, SamplingParams
from vllm.model_executor.parallel_utils.parallel_state import destroy_model_parallel
# Load the model via vLLM
llm = LLM(model=model_name, download_dir=saver_dir, tensor_parallel_size=num_gpus, gpu_memory_utilization=0.70)
# Delete the llm object and free the memory
destroy_model_parallel()
del llm.llm_engine.driver_worker
del llm
gc.collect()
torch.cuda.empty_cache()
torch.distributed.destroy_process_group()
print("Successfully delete the llm pipeline and free the GPU memory!")
I had already read that. My problem remains unsolved when I use the Vllm wrapper from LlamaIndex; otherwise it almost works. A small amount of memory stays in use (~1 GB), but at least I can load and unload the models. The problem is that I can't find how to access the llm_engine member of vllm.LLM through the wrapper.
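For the LlamaIndex case, a hedged sketch of how the same cleanup could be reached if the wrapper keeps the underlying vllm.LLM on a private attribute; the name _client here is an assumption about the wrapper's internals, not a documented API, so inspect your instance (e.g. with vars(wrapped_llm)) to find the real attribute:

import gc
import torch
from vllm.model_executor.parallel_utils.parallel_state import destroy_model_parallel

# ASSUMPTION: `wrapped_llm` is the LlamaIndex Vllm instance and `_client`
# is the private attribute holding the underlying vllm.LLM object.
inner_llm = wrapped_llm._client

destroy_model_parallel()
del inner_llm.llm_engine.driver_worker
del inner_llm
del wrapped_llm
gc.collect()
torch.cuda.empty_cache()
torch.distributed.destroy_process_group()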
@chenxu2048 the notebook output is just computed data shown to the user, the Python kernel computes it but it's a one-way communication - the output doesn't affect the kernel at all. Therefore clearing the output will have no effect on GPU memory or any other state of the kernel.
No resolute answer given. Can a model be unloaded from GPU RAM with vLLM? Yes or no?
Worst-case scenario: use the notebook cell magic %%writefile to write the script to a Python file, then run that file from within the notebook. When vLLM finishes, the child process exits and the memory is reclaimed.
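A minimal sketch of that workaround; the script name, model, and prompt are placeholders. The first cell writes the script with the %%writefile cell magic, the second runs it with !python, so vLLM lives and dies inside a child process and its GPU memory is released on exit:

%%writefile run_one_model.py
# Everything vLLM allocates lives in this child process only.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder model
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)

and then, in the next cell:

!python run_one_model.py  # GPU memory is freed when this process exits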
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
Can this be done via an HTTP API, or with a simple timeout when there are no requests?
This just never seems to work - I'm sometimes not even able to terminate the main process if I need to as the memory is never cleaned.
+1 for an API.
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!