[FEAT] Model is always loaded in VRAM
Is this a new feature request?
- [x] I have searched the existing issues
Wanted change
Save GPU VRAM when the model is not in use. VRAM is a valuable resource, and it should be possible to configure a keep_alive value. For example, Ollama configures it like this:
- keep_alive=-1 keeps the model in memory indefinitely
- keep_alive=0 unloads the model after each use
- keep_alive=60 keeps the model in memory for 1 minute after use
This could be an environment variable, defaulting to -1 so it is not a breaking change for anyone.
Reason for change
Right now the model is loaded into memory as soon as the container starts, and stays there even when not in use.
Proposed code change
No response
Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.
This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.
@ecker00 What I do for my infra is to stop the container to free the VRAM; a service like sablier can help you do that automatically.
This is my home assistant voice, so I kind of need it available at all times, but I don't mind waiting a few seconds for the model to load on first wake up after being inactive.
This issue is locked due to inactivity