Dud

5 comments of Dud

> I have to echo this sentiment. I appreciate the desire to release a quality dataset, but I don't follow the logic of withholding it until it's been "cleaned". Judging...

> Hello, during full fine-tuning the embedding layer with the additional tokens is also trained, which is not the case when using PEFT LoRA as per the code you shared. I...

This solved my issue in https://github.com/huggingface/peft/issues/349#issue-1677573675
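
For context, a minimal sketch of the setup the comment above describes: making the embedding rows for newly added tokens trainable while fine-tuning with PEFT LoRA, e.g. via `modules_to_save`. This is not the code from the linked issue; the model name and module names (`embed_tokens`, `lm_head`) are assumptions for a LLaMA-style model and may differ for other architectures.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "huggyllama/llama-7b"  # hypothetical model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# New tokens need rows in the embedding matrix before anything can train them.
tokenizer.add_tokens(["<extra_token_1>", "<extra_token_2>"])
model.resize_token_embeddings(len(tokenizer))

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # usual attention projections
    # Unlike full fine-tuning, plain LoRA leaves the embedding frozen, so the
    # rows for the added tokens never receive gradients. modules_to_save keeps
    # full (non-LoRA) copies of these modules trainable and stores them with
    # the adapter.
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```
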

> Thank you @Splo2t for adding the support to have LoRA layers in Embedding modules 🤗, this is really cool 🔥. Left a few suggestions.
>
> Are there evaluation...
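
A minimal sketch of what that support enables: instead of fully training the embedding via `modules_to_save`, the embedding module can be listed in `target_modules`, so PEFT wraps it in a LoRA embedding layer and only the low-rank update is trained. The module name `embed_tokens` is again an assumption, and `model` is the resized model from the sketch above.

```python
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=8,
    lora_alpha=16,
    # Including the embedding module here applies a LoRA adapter to it,
    # rather than unfreezing the whole embedding matrix.
    target_modules=["embed_tokens", "q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```
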

> > Training a LoRA on a dataset with a strange use of tokens that the model likely didn't see during full training results in a high probability of a...