text-generation-inference
Dynamically serve LoRA modules
Feature request
Do you plan to integrate dynamic serving of LoRA modules, so that new modules can be added or removed at runtime instead of having to restart the engine and list them in the LORA_ADAPTERS environment variable?
Motivation
I am training multiple LoRA modules and want to serve them through my inference endpoint as soon as they are ready, without manually restarting the server to register them. For example, sending a request to some load_lora endpoint with a URL or path to the new module could add it at runtime.
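A minimal sketch of what such a call could look like. The endpoint name (`/load_lora`) and payload fields are purely illustrative assumptions for this proposal, not an existing TGI API:

```python
import json
from urllib import request

# Hypothetical payload for the proposed endpoint: identify the adapter
# by a Hub ID or local path so the server can load it at runtime.
payload = json.dumps({
    "adapter_id": "my-org/my-lora-adapter",  # illustrative adapter name
}).encode("utf-8")

# Hypothetical endpoint -- does not exist in TGI today.
req = request.Request(
    "http://localhost:8080/load_lora",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would then register the adapter without a restart,
# making it selectable per-request afterwards.
```

The same shape could cover removal (e.g. an unload counterpart), keeping the set of served adapters fully dynamic.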
Your contribution
I could open a PR for this.