PEViT
Question about LoRA alpha
Hi, thanks for your great work. I noticed that in your scripts, you hard-coded the LoRA alpha to 128 and the rank r to 4, giving a scaling factor of alpha / r = 32: https://github.com/eric-ai-lab/PEViT/blob/be6fb43ff54adeeffe720c663dd238976070558e/vision_benchmark/evaluation/lora_model.py#L455-L463 Was there a principled justification for these choices, or did you tune these values? I am wondering what values would be good to use.
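For context, here is a minimal sketch of how alpha and r combine into that scaling factor in a standard LoRA linear layer. The `LoRALinear` class and its initialization below are illustrative, not PEViT's actual implementation; only the `alpha / r` scaling rule is taken from the LoRA formulation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA-style linear layer: y = Wx + (alpha / r) * B(Ax)."""
    def __init__(self, in_features, out_features, r=4, lora_alpha=128):
        super().__init__()
        # Frozen "pretrained" weight (randomly initialized here for the sketch).
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        self.weight.requires_grad = False
        # Low-rank update: A projects down to rank r, B projects back up.
        # B starts at zero so the update is initially a no-op, as in LoRA.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        # The update is scaled by alpha / r: alpha=128, r=4 gives 32.
        self.scaling = lora_alpha / r

    def forward(self, x):
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * update

layer = LoRALinear(768, 768)
print(layer.scaling)  # 32.0
```

Since the scaling multiplies the learned update, changing alpha while holding r fixed acts roughly like changing the effective learning rate of the LoRA parameters, which is why I am curious whether 128/4 was tuned.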
Hi, thanks for the interest! The setting is inherited from LoRA's official development code: https://github.com/microsoft/LoRA/tree/snapshot-9-15-2021 https://github.com/microsoft/LoRA/blob/snapshot-9-15-2021/src/model.py