
Alternate implementation of LoRA leveraging tensor subclasses and reparametrization.

Open Chillee opened this issue 1 year ago • 3 comments

I thought this might be interesting as an alternate implementation of LoRA leveraging tensor subclasses and reparametrization.

https://gist.github.com/Chillee/a8d2070b1b7b3f97d8c87bac3c366f8e

The main idea here is that we can leverage parametrization to transform our parameter in a way that composes with existing modules (i.e. we don't need to introduce a totally new layer).
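A minimal sketch of that idea using `torch.nn.utils.parametrize` (the class name `LoRAParametrization`, the rank, and the init scheme here are illustrative assumptions, not the gist's exact code):

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class LoRAParametrization(nn.Module):
    """Hypothetical sketch: reparametrize a weight W as W + B @ A with rank-r factors."""
    def __init__(self, fan_in, fan_out, rank=4):
        super().__init__()
        # A is small random, B starts at zero so the layer initially behaves unchanged.
        self.A = nn.Parameter(torch.randn(rank, fan_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(fan_out, rank))

    def forward(self, W):
        # The owning module still sees a plain (fan_out, fan_in) weight tensor.
        return W + self.B @ self.A

layer = nn.Linear(16, 32)
parametrize.register_parametrization(
    layer, "weight", LoRAParametrization(fan_in=16, fan_out=32, rank=4)
)
x = torch.randn(8, 16)
y = layer(x)  # an unmodified nn.Linear transparently uses the reparametrized weight
```

Because the parametrization hooks the existing `weight` attribute, any module that consumes that parameter picks up the LoRA update for free.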

Then, since LoRA also needs to exploit the special (low-rank) matrix structure for efficiency, we return a tensor subclass with special handling when we encounter F.linear(x: Tensor, weight: LoraTensor, bias: Tensor). This tensor subclass composes with autograd, so we can still differentiate through our tensor.
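A rough sketch of what that interception can look like via `__torch_function__` (this is an assumption-laden reconstruction, not the gist itself; it also assumes a PyTorch version providing `torch._C.DisableTorchFunctionSubclass`):

```python
import torch
import torch.nn.functional as F

class LoraTensor(torch.Tensor):
    """Hypothetical sketch: a tensor subclass carrying W plus low-rank factors A, B."""
    @staticmethod
    def __new__(cls, W, A, B):
        # Present as an ordinary tensor with W's shape/dtype (shares W's storage).
        return torch.Tensor._make_subclass(cls, W)

    def __init__(self, W, A, B):
        self.A = A  # (rank, fan_in)
        self.B = B  # (fan_out, rank)

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is F.linear:
            x, weight = args[0], args[1]
            bias = args[2] if len(args) > 2 else kwargs.get("bias")
            with torch._C.DisableTorchFunctionSubclass():
                # Never materialize W + B @ A: apply W, then the cheap rank-r path.
                out = F.linear(x, weight, bias)
                out = out + (x @ weight.A.t()) @ weight.B.t()
            return out
        # Everything else falls through to the normal implementation.
        with torch._C.DisableTorchFunctionSubclass():
            return func(*args, **kwargs)
```

The `F.linear` branch computes `x @ W.T + (x @ A.T) @ B.T`, which equals `x @ (W + B @ A).T` but avoids forming the dense update; other ops see a plain tensor.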

Chillee — May 04 '23 00:05