FEAT Add sine-LoRA #2434
Hi,
I just implemented sine-LoRA in `variants.py` and changed `resolve_lora_variant` a bit in `layer.py`.
@githubnemo Hi, I am still a bit confused about how to add `module.sinelora_frequency = kwargs['sinelora_frequency']` and the analogous assignment for `sinelora_scaling`. I get a `KeyError` when I put `module.sinelora_frequency = kwargs['sinelora_frequency']` in `class SineLoraLinearVariant(LoraVariant)`.
Hi @yipingji
Sorry for being unclear. I tried to highlight the issue in https://github.com/huggingface/peft/pull/2457#discussion_r2020957043. The kwargs come from the `update_layer` calls, and the `update_layer` calls get their parameters from the layer `__init__` calls. As long as those don't accept `sinelora_frequency` and `sinelora_scaling`, the two values will be missing from the kwargs in `SineLora*Variant.init` as well.
So you have to update the following functions with the new parameters, the same way you did to add the `use_sinelora` parameter:

- `LoraLayer.update_layer`
- `Linear.__init__`
- `Linear.update_layer`
- `Embedding.__init__`
- `Embedding.update_layer`
For example, the change for LoraLayer and Linear could look like this:
```diff
diff --git a/src/peft/tuners/lora/layer.py b/src/peft/tuners/lora/layer.py
index cb09608..efd8d6a 100644
--- a/src/peft/tuners/lora/layer.py
+++ b/src/peft/tuners/lora/layer.py
@@ -183,6 +183,8 @@ class LoraLayer(BaseTunerLayer):
         use_rslora,
         use_dora: bool = False,
         use_sinelora: bool = False,
+        sinelora_frequency: float = 200.0,
+        sinelora_scaling: Optional[float] = None,
         lora_bias: bool = False,
     ):
         # collect the kwargs
@@ -574,6 +576,8 @@ class Linear(nn.Module, LoraLayer):
         use_dora: bool = False,
         lora_bias: bool = False,
         use_sinelora: bool = False,
+        sinelora_frequency: float = 200.0,
+        sinelora_scaling: Optional[float] = None,
         **kwargs,
     ) -> None:
         super().__init__()
@@ -591,6 +595,8 @@ class Linear(nn.Module, LoraLayer):
             use_dora=use_dora,
             lora_bias=lora_bias,
             use_sinelora=use_sinelora,
+            sinelora_frequency=sinelora_frequency,
+            sinelora_scaling=sinelora_scaling,
         )
         self.is_target_conv_1d_layer = is_target_conv_1d_layer
```
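Once those parameters are threaded through, the variant side can safely read them from the kwargs. For illustration, here is a rough sketch of what `SineLoraLinearVariant` could look like in `variants.py`. This is not your PR's actual code: the static `init`/`forward` interface mirrors `DoraLinearVariant`, and the import path, the fallback scaling, and the exact sine formulation (an element-wise sine over the low-rank product, following the sine-LoRA paper) are my assumptions:

```python
import math

import torch

from .layer import LoraVariant  # import path assumed


class SineLoraLinearVariant(LoraVariant):
    @staticmethod
    def init(module, adapter_name: str, **kwargs) -> None:
        # These keys only exist once sinelora_frequency / sinelora_scaling
        # are threaded through Linear.__init__ -> LoraLayer.update_layer as
        # in the diff above; until then this lookup is exactly the KeyError
        # you are seeing.
        module.sinelora_frequency = kwargs["sinelora_frequency"]
        scaling = kwargs["sinelora_scaling"]
        if scaling is None:
            # Hypothetical fallback when the user passes no explicit scaling.
            scaling = math.sqrt(module.in_features)
        module.sinelora_scaling = scaling

    @staticmethod
    def forward(module, active_adapter: str, x: torch.Tensor, result: torch.Tensor) -> torch.Tensor:
        lora_A = module.lora_A[active_adapter].weight  # (r, in_features)
        lora_B = module.lora_B[active_adapter].weight  # (out_features, r)
        dropout = module.lora_dropout[active_adapter]
        # Element-wise sine over the low-rank product gives the weight delta.
        delta_w = torch.sin(module.sinelora_frequency * (lora_B @ lora_A)) / module.sinelora_scaling
        result = result + dropout(x) @ delta_w.T * module.scaling[active_adapter]
        return result
```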
I hope that helps!
To test the Embedding layer implementation you could add

```python
("Embedding + transformers Conv1D 2 LoRA + SineLoRA", "EmbConv1D", LoraConfig, {"target_modules": ["emb"], "use_sinelora": True}),
```

to the custom models test cases you already added.
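For reference, once everything is wired through, enabling the variant from user code would look roughly like this (a sketch: the `use_sinelora`, `sinelora_frequency`, and `sinelora_scaling` fields are the ones this PR adds; they don't exist in released PEFT versions):

```python
from peft import LoraConfig

# Sketch only: the use_sinelora / sinelora_* fields are introduced by this
# PR and are not part of a released PEFT version.
config = LoraConfig(
    r=8,
    target_modules=["q_proj", "v_proj"],
    use_sinelora=True,
    sinelora_frequency=200.0,  # default proposed in the diff above
    sinelora_scaling=None,     # None -> let the variant pick a default
)
```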
Hi @githubnemo, apologies for the previous update; this is my first time opening a PR in such a large codebase. I was just wondering whether my current version is correct.
@yipingji gentle ping :)
@yipingji gentle ping :)
Sorry for the late update. I am working toward a conference deadline at the moment and will update the PR soon ;)
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Hi @githubnemo,
How can I reopen the PR?
@yipingji I've reopened the PR and added the WIP tag so the stale bot won't bother us.
@githubnemo I just updated the PR, please have a look ;)
gentle ping @githubnemo ;)
Hey @yipingji, do you still plan on implementing this further?
Yes, I plan to implement this further, but I have been quite busy with my project recently. I will update the PR next month :)