
Results: 21 comments by r0

Do you think #536 would help here?

Hi @sanchit-gandhi and @connor-henderson, I saw the PR, but I was wondering whether `always_use_initial_prompt` and `condition_on_previous_text` have also been integrated into the API? If not, is there any active work going...
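For context, `initial_prompt` and `condition_on_previous_text` are options on the reference `openai-whisper` package's `transcribe()`; a minimal sketch of their use there (the audio path and prompt are placeholders, and `always_use_initial_prompt` is not part of that reference API):

```python
import whisper

model = whisper.load_model("base")
result = model.transcribe(
    "audio.wav",                      # placeholder input file
    initial_prompt="Domain-specific vocabulary goes here.",
    condition_on_previous_text=True,  # reuse the previous window's decoded text as context
)
print(result["text"])
```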

Okay, in case someone is not able to find it, you'll have to manually download the EnCodec weights via torch hub:

```python
import torch

url = 'https://dl.fbaipublicfiles.com/encodec/v0/encodec_24khz-d7cc33bc.th'
state = torch.hub.load_state_dict_from_url(url, map_location='cpu',...
```
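For completeness, a minimal sketch of the full call (the snippet above is cut off), assuming the default torch hub cache location:

```python
import torch

url = 'https://dl.fbaipublicfiles.com/encodec/v0/encodec_24khz-d7cc33bc.th'
# Downloads into the torch hub cache (~/.cache/torch/hub/checkpoints by default)
# and returns the checkpoint loaded onto CPU.
state = torch.hub.load_state_dict_from_url(url, map_location='cpu')
print(type(state))
```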

@imohitmayank I would also suggest setting the `ensure_weight_tying` flag to True in `LoraConfig` if you add the embedding layer to `modules_to_save`. This would keep the weight tying consistent and mark...

@imohitmayank Can you try the `ensure_weight_tying` flag with `modules_to_save`? Instead of passing `trainable_tokens`, can you please try passing the `embed_tokens` layer to `modules_to_save`?
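A minimal sketch of that suggestion, assuming a Llama-style model whose `embed_tokens` and `lm_head` weights are tied, and assuming the `ensure_weight_tying` flag discussed in this thread is available in your PEFT version (the checkpoint name is a placeholder):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder checkpoint; any causal LM with tied input/output embeddings works.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

config = LoraConfig(
    r=8,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["embed_tokens"],  # fully fine-tune the tied embedding
    ensure_weight_tying=True,          # keep embed_tokens/lm_head tied (flag from this thread)
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```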

@imohitmayank Yes, you are correct. I am not sure what should be done here from the PEFT side; @BenjaminBossan would be the right person for that. But as far as we...

@BenjaminBossan I have added the relevant test cases and implemented the `ensure_weight_tying` flag for `target_modules`. The current implementation works only if `embed_tokens` is added and not if `lm_head` is added....
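For the `target_modules` path, a sketch of the case that currently works (LoRA on `embed_tokens`, with `lm_head` not targeted), again assuming the `ensure_weight_tying` flag from this PR:

```python
from peft import LoraConfig

config = LoraConfig(
    r=8,
    target_modules=["embed_tokens"],  # LoRA adapter on the tied input embedding
    ensure_weight_tying=True,         # keep the adapter consistent with the tied lm_head
)
# Adding "lm_head" to target_modules is the case noted above as not yet supported.
```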

@BenjaminBossan This is now ready for review. I have also updated the logic for tied layers in `modules_to_save` so that the `lm_head` and `[embed_tokens, lm_head]` cases are supported. Earlier, they would...

@BenjaminBossan I have addressed your comments. PTAL