Benjamin Bossan

Results 791 comments of Benjamin Bossan

Hmm, this is hard to determine as you're using a custom dataset. Just to be sure, without PEFT (i.e. when trying full fine-tuning), the error does not occur, right? ...

Could you please provide the code that loads the base model and then applies the PEFT model on top?
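For reference, a minimal sketch of what such loading code typically looks like; the adapter path is a placeholder and the details of the reporter's actual setup are not known:

```python
# Sketch of loading a base model and applying a trained PEFT adapter on top.
# "my-peft-adapter" is a placeholder path, not the reporter's actual checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_id = "THUDM/glm-4-9b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Load the PEFT adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, "my-peft-adapter")
```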

Thanks for the additional details. I could reproduce the error using the model `THUDM/glm-4-9b-chat`. The issue is that this model uses custom code, which is not compatible with `PromptEncoder`. As...

> if this line should change?
> outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=padded_labels)

No, this line can stay as is. PEFT will handle the extension of the embeddings internally.
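To illustrate the point, a small sketch (assumed setup, not the reporter's script): after wrapping the base model with a prompt-learning config, the training step is called exactly as before, because PEFT prepends the virtual tokens and extends the attention mask and labels internally.

```python
# Illustrative sketch: the forward call with input_ids/attention_mask/labels
# stays unchanged after get_peft_model; PEFT handles the virtual tokens.
import torch
from transformers import AutoModelForCausalLM
from peft import get_peft_model, PromptEncoderConfig

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model
peft_config = PromptEncoderConfig(task_type="CAUSAL_LM", num_virtual_tokens=20)
model = get_peft_model(base_model, peft_config)

input_ids = torch.randint(0, base_model.config.vocab_size, (2, 16))
attention_mask = torch.ones_like(input_ids)
padded_labels = input_ids.clone()

# Same line as in the original training loop; no manual handling of virtual tokens.
outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=padded_labels)
print(outputs.loss)
```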

> When I change this code, it always automatically reverts from hugging face to before the change You mean the code in `modeling_chatglm.py`? Maybe there is a logic to check...

I cannot reproduce this. Since I don't know what data you used, I'm using some dummy data:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import get_peft_model, PromptEncoderReparameterizationType, ...
```
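The original snippet is cut off, so here is a hedged reconstruction of what a dummy-data repro along those lines might look like; the virtual token count, random batch, and other arguments are assumptions, and with `THUDM/glm-4-9b-chat` it would presumably be run together with the `modeling_chatglm.py` adjustments discussed in this thread:

```python
# Hedged reconstruction of a dummy-data repro script; not the exact code from the comment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import get_peft_model, PromptEncoderConfig, PromptEncoderReparameterizationType

model_id = "THUDM/glm-4-9b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

peft_config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,
    encoder_reparameterization_type=PromptEncoderReparameterizationType.MLP,
)
model = get_peft_model(base_model, peft_config)

# Dummy batch standing in for the user's dataset
input_ids = torch.randint(0, tokenizer.vocab_size, (2, 32))
attention_mask = torch.ones_like(input_ids)
labels = input_ids.clone()

outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
outputs.loss.backward()
```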

> my data like this:
> with O(O is the labbel)

Sorry, I don't understand this.

I got your code working using `THUDM/glm-4-9b-chat` and the data you attached earlier. In addition to the code change discussed above, I had to change these lines: https://huggingface.co/THUDM/glm-4-9b-chat/blob/c24133cef34ff7a7010f1e97c113effdead0966b/modeling_chatglm.py#L880-L882

```python
if ...
```
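The actual diff is truncated above. As a purely hypothetical illustration of the general class of incompatibility mentioned earlier (custom modeling code vs. `PromptEncoder`), prompt-learning methods call the model with `inputs_embeds` rather than `input_ids`, so custom forward code that unconditionally reads `input_ids.shape` needs a fallback; the helper below is an invented example of that pattern, not the change made to `modeling_chatglm.py`:

```python
# Hypothetical illustration only; NOT the actual change to modeling_chatglm.py.
import torch

def infer_batch_and_seq_len(input_ids=None, inputs_embeds=None):
    """Derive (batch_size, seq_length) from whichever input is available."""
    if inputs_embeds is not None:
        return inputs_embeds.shape[0], inputs_embeds.shape[1]
    return input_ids.shape[0], input_ids.shape[1]

# Example: only embeddings are provided, as happens with prompt learning.
embeds = torch.randn(2, 24, 4096)
print(infer_batch_and_seq_len(inputs_embeds=embeds))  # (2, 24)
```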

Thanks for providing more context @bghira. I wrote the code required to test this change:

```python
@parameterized.expand([IA3Config, LoHaConfig, LoKrConfig, LoraConfig, HRAConfig, BoneConfig])
def test_add_weighted_adapter_cat_with_rank_pattern(self, config_cls):
    # Fixes a bug described ...
```
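For readers unfamiliar with the feature being tested, a hedged usage sketch of merging adapters with `combination_type="cat"` when one adapter uses `rank_pattern`; the model, module names, and adapter names are assumptions, not taken from the test above:

```python
# Assumed setup, not the test from the comment: merge two LoRA adapters with
# combination_type="cat" where one of them overrides the rank per module.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# First adapter with a per-module rank override via rank_pattern
config_a = LoraConfig(r=8, target_modules=["c_attn"], rank_pattern={"h.0.attn.c_attn": 16})
model = get_peft_model(base_model, config_a, adapter_name="adapter_a")

# Second adapter with a uniform rank
config_b = LoraConfig(r=8, target_modules=["c_attn"])
model.add_adapter("adapter_b", config_b)

# "cat" concatenates the LoRA matrices, so the merged adapter's rank per module
# is the sum of the input adapters' ranks.
model.add_weighted_adapter(
    adapters=["adapter_a", "adapter_b"],
    weights=[1.0, 1.0],
    adapter_name="merged",
    combination_type="cat",
)
model.set_adapter("merged")
```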

_not stale, waiting for #2458 to be merged_