
Dynamically serve LoRA modules

rikardradovac opened this issue 10 months ago · 1 comment

Feature request

Do you plan to support dynamic serving of LoRA modules, so that new modules can be added or removed at runtime instead of having to restart the engine and add them to the LORA_ADAPTERS env variable?

Motivation

I am training multiple LoRA modules and want to serve them as soon as possible through my inference endpoint, without manually restarting the server and re-listing the modules. For example, a client could send a request to some load_lora endpoint with a URL or path to the new module to add.
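As a sketch of the proposed API (the `load_lora` endpoint name and its payload shape are hypothetical here, not part of TGI today), a client could register a freshly trained adapter like this:

```python
import json

def build_load_lora_request(adapter_id: str, source: str) -> dict:
    """Build the JSON body for a hypothetical /load_lora endpoint.

    adapter_id: the name clients would use to select the adapter at
                inference time (e.g. via the `adapter_id` request field)
    source:     a Hub repo id or local path to the trained LoRA module
    """
    return {"adapter_id": adapter_id, "source": source}

# Register a new adapter without restarting the server (hypothetical flow):
payload = build_load_lora_request("my-new-adapter", "user/my-new-lora")
print(json.dumps(payload))
```

The server would then download or load the module from `source` and make it selectable under `adapter_id`, mirroring how LORA_ADAPTERS works at startup.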

Your contribution

I could open a PR.

rikardradovac · Dec 20 '24