Benjamin Bossan
Thanks for the pointer (here is the [permalink](https://github.com/stanfordnlp/pyvene/blob/ffad51a169d0dc76f020da0d62fc14ee82732746/pyvene/models/intervenable_base.py#L963) just in case). Let's wait and see what @raven38's opinion is on this.
@pacman100 Thanks for the review, I have addressed your comments:
- Fixed the typo
- Also cast bfloat16
- Also cast for other tuners, not just LoRA (i.e. LoHa, OFT,...
I don't have much experience with webui. Did you try running this script? https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/convert_peft_sd_lora_to_kohya_ss.py
_Status: This is ready to be merged after the PEFT v0.11 release_
Could you please show the code to reproduce this?
When you load the base model before merging, i.e. here:

```python
peft_config = PeftConfig.from_pretrained(args.lora_model_name_or_path)
base_model = AutoModelForCausalLM.from_pretrained(
    args.base_model_name_or_path
    if args.base_model_name_or_path
    else peft_config.base_model_name_or_path,
)
```

you are not passing `torch_dtype=torch.bfloat16` as...
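To illustrate why the dtype of the base model matters here, below is a minimal plain-PyTorch sketch (hypothetical weights, not the actual PEFT merging code): a LoRA-style low-rank delta merged into a bfloat16 base stays bfloat16, while a base loaded without `torch_dtype` defaults to float32 and the merged weights end up float32 as well.

```python
import torch

# Hypothetical base weights: one loaded in bfloat16, one left at the
# float32 default (what happens when torch_dtype is not passed).
base_bf16 = torch.nn.Linear(4, 4).to(torch.bfloat16)
base_fp32 = torch.nn.Linear(4, 4)

# A LoRA-style low-rank update in bfloat16.
lora_a = torch.randn(4, 2, dtype=torch.bfloat16)
lora_b = torch.randn(2, 4, dtype=torch.bfloat16)
delta = lora_b.T @ lora_a.T  # shape (4, 4), bfloat16

# Merging into the bfloat16 base keeps the expected dtype...
merged_bf16 = base_bf16.weight.data + delta
# ...while merging into the float32 base promotes the result to float32.
merged_fp32 = base_fp32.weight.data + delta.to(torch.float32)

print(merged_bf16.dtype)  # torch.bfloat16
print(merged_fp32.dtype)  # torch.float32
```

So if the checkpoint is meant to be bfloat16, the dtype needs to be set when loading the base model, before the merge.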
*not stale*
_not stale_
@younesbelkada Have I addressed all your concerns? I pinged the user who wanted to test it on their case. When it comes to docs, I didn't really find a place...
Note: The test `test_merge_layers` for MHA fails. This is most likely because of an existing bug in how merging is implemented, see PR #1355. Once that is merged, the test...