[Feature request] Add support for LoRA adapter weights

Open chauhang opened this issue 10 months ago • 1 comments

For task-specific domain adaptation, support for LoRA adapter weights is needed across a variety of use cases for LLM and diffusion models:

  1. On mobile, where the base foundation model is preloaded on the device, give each application the option to dynamically swap LoRA weights in and out for a given task -- e.g. text summarization, sentiment analysis, language translation, or image generation matching a selected artistic preference (such as animation-style images).
  2. In the laptop/desktop AI Copilot scenario, with the base foundation model preloaded, perform different tasks via LoRA adapter weights based on each application's context, similar to mobile.

Low latency when swapping adapter weights is a key requirement for the above use cases; recompiling the entire model is not a practical option given the latencies involved.
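To illustrate the requested behavior, here is a minimal sketch of runtime adapter swapping on top of a frozen base layer. This is not torchchat's API — the `LoRALinear`, `load_adapter`, and `unload_adapter` names are hypothetical — but it shows why the swap can be fast: only the small low-rank matrices change, while the preloaded base weights and compiled graph stay untouched.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Hypothetical wrapper: a frozen base nn.Linear plus a swappable
    low-rank (LoRA) delta that can be attached/detached at runtime."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base foundation weights stay fixed
        self.lora_A = None  # shape (r, in_features)
        self.lora_B = None  # shape (out_features, r)
        self.scale = 1.0

    def load_adapter(self, A: torch.Tensor, B: torch.Tensor, scale: float = 1.0):
        # Swapping in a task adapter only replaces two small tensors;
        # no reload or recompile of the base model is involved.
        self.lora_A, self.lora_B, self.scale = A, B, scale

    def unload_adapter(self):
        self.lora_A = self.lora_B = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)
        if self.lora_A is not None:
            # LoRA update: y += scale * (x @ A^T) @ B^T
            y = y + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
        return y


# Usage: preload the base once, then swap per-task adapters on demand.
layer = LoRALinear(nn.Linear(16, 16))
rank = 4
summarize_A = torch.randn(rank, 16)
summarize_B = torch.randn(16, rank)
layer.load_adapter(summarize_A, summarize_B)   # e.g. text summarization
layer.unload_adapter()                          # back to base behavior
```

In a real implementation the adapter tensors would be loaded from a per-task checkpoint, and merging the delta into the base weights (rather than applying it on the fly) would trade swap latency for inference speed.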

chauhang avatar Apr 06 '24 05:04 chauhang