Benjamin Bossan

584 comments by Benjamin Bossan

@merveenoyan Re-running solved the codecov issue. Not sure why codecov hates you specifically :D Could you please merge the current main branch, as I have now added CI for Python 3.11?...

@merveenoyan Some tests are failing. This should mostly be fixed after merging the current main branch. Could you please do that?

> I realized pytest doesn't collect tests of parser for some reason on mac (assuming it's same on windows) (that's why nothing failed on my local, I guess) Probably it's...

Quick update: PR #578 implements this and is quite far along; it will probably land soon.

> In the meanwhile, let me know what I should consider before making a PR.

We just added a [contributing guide](https://huggingface.co/docs/peft/developer_guides/contributing), maybe that can be helpful.

@deema-A Could you please provide the code that results in this error? Also, please always paste the full error message; otherwise, it will be very hard to figure out the...

> i add wte in target_modules, the embeddings layer will also train in peft?

The embeddings themselves will stay frozen, but the LoRA weights for the embeddings will be updated....
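To illustrate the distinction, here is a framework-agnostic sketch of the LoRA idea in plain Python (not the actual PEFT internals; `wte` would be a model's token embedding matrix): the frozen base weight `W` is never modified, only the low-rank factors `A` and `B` are updated, and the effective weight is `W + B @ A`.

```python
# Minimal LoRA sketch: the frozen base embedding W is never modified;
# a "training step" only changes the low-rank factors A and B.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Base embedding: 3 tokens, embedding dim 2 (frozen, as in PEFT).
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

# LoRA factors with rank r=1 (trainable); B starts at zero, so the
# effective weight initially equals W exactly.
A = [[0.0, 0.0]]            # r x dim
B = [[0.0], [0.0], [0.0]]   # vocab x r

effective_before = add(W, matmul(B, A))

# A "training step" touches only A and B, never W.
A = [[0.1, -0.2]]
B = [[1.0], [0.5], [0.0]]

effective_after = add(W, matmul(B, A))

print(effective_before[0])  # same as W[0]
print(effective_after[0])   # shifted by (B @ A)[0]
print(W[0])                 # base weights unchanged
```

So even though `wte` appears in `target_modules`, only the adapter weights receive gradient updates; the base embedding is recovered unchanged when the adapter is removed.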

> Any updates on this issue? Still seeing this bug

Which one exactly do you mean? Note that for the case of loading the model in float16, you have to...

> I am confused about how to understand the relation between torch_dtype, fp16, modules_to_save?

In general, when you want to use mixed precision (i.e. `fp16=True`), the weights to be trained...
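A rough illustration of why trainable weights are usually kept in float32 under mixed precision: float16 has only a 10-bit mantissa, so a small gradient update added to a weight near 1.0 can round away entirely. The snippet below simulates that limited mantissa in pure Python; it is a teaching sketch, not how PyTorch's AMP is implemented.

```python
import math

def to_fp16_like(x, mantissa_bits=10):
    """Round x to a float16-like precision (10 explicit mantissa bits)."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (exp - mantissa_bits)
    return round(x / scale) * scale

weight_fp32 = 1.0
weight_fp16 = 1.0
update = 1e-4  # a typical small optimizer step

for _ in range(100):
    weight_fp32 += update                             # accumulates normally
    weight_fp16 = to_fp16_like(weight_fp16 + update)  # rounds back to 1.0

print(weight_fp32)  # updates accumulate in float32
print(weight_fp16)  # every update is lost to rounding in "fp16"
```

This is why the trainable parameters (including those listed in `modules_to_save`) are typically held in float32, while the frozen forward pass can run in half precision.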

> What is the default weights type for lora layers?

In general, the LoRA layers use the same dtype for their parameters as the original layers, but there can be...
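The "same dtype as the original layer" rule can be sketched as follows. This is a hypothetical stand-in (the function and dict layout are invented for illustration, not PEFT's API): when an adapter is created, its A/B parameters simply inherit the dtype of the base layer they attach to.

```python
# Hypothetical sketch of the dtype rule: LoRA's A/B parameters are
# allocated with the dtype of the base layer they wrap.

def make_lora_params(base_dtype, rank, in_features, out_features):
    """Return metadata for the A/B factors, inheriting the base dtype."""
    A = {"shape": (rank, in_features), "dtype": base_dtype}
    B = {"shape": (out_features, rank), "dtype": base_dtype}
    return A, B

# A base layer loaded in float16 yields float16 LoRA factors.
A, B = make_lora_params("float16", rank=8, in_features=768, out_features=768)
print(A["dtype"], B["dtype"])
```

Special setups (e.g. quantized base weights, or upcasting trainable parameters for mixed-precision training) can deviate from this default, which is what the "but there can be..." caveat refers to.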