gemma-2B-10M
LoRA fine-tuning code?
Hi,
Can this be fine-tuned with LoRA without any additional scripts? Also, if we fine-tune with a sequence length of 512 or 1k, will that affect inference at higher context lengths of, say, 16k or 32k?
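For context, here is the kind of setup I have in mind: a minimal sketch using the PEFT library, assuming the checkpoint loads through the standard transformers `AutoModelForCausalLM` API. The repo id and `target_modules` below are placeholders I would need to verify for this model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Placeholder repo id -- substitute the actual Hugging Face path for gemma-2B-10M.
model_name = "gemma-2B-10M"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=16,                        # scaling factor for the adapter updates
    target_modules=["q_proj", "v_proj"],  # typical attention projections; verify names for this model
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the LoRA adapter weights are trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: trainable params should be a small fraction
```

If this works out of the box, the wrapped model should train with a standard `Trainer` loop; my question is whether this model's long-context mechanism needs anything beyond that.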