Benjamin Bossan


@derekelewis If you have a minimal reproducer to share, that would be great.

@derekelewis Thanks for the script. Unfortunately I could not run it due to memory constraints, but it's still helpful. I can spot 3 potential issues: 1. You're using a rank...

@Vermeille If you could share a minimal reproducer, we could take a look; otherwise, it's going to be hard for us to help.

@joann-alvarez When it comes to `q_proj`, `v_proj`, `down_proj`, those are just standard linear layers that are commonly targeted. The reason why I suggested applying LoRA to the `lm_head` and `embed_tokens`...
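As a minimal sketch of what that could look like (the model name is just a placeholder for a Llama-style architecture, where these module names exist; whether to include `embed_tokens`/`lm_head` depends on your use case):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# placeholder Llama-style model; the module names below match that architecture
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

config = LoraConfig(
    r=8,
    lora_alpha=16,
    # standard linear projections, plus the embedding and output layers mentioned above
    target_modules=["q_proj", "v_proj", "down_proj", "embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```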

Sorry, I don't understand everything that you wrote. There should be an `adapter_config.json` for your PEFT adapter, probably inside of `/home/Baichuan2/Langchain-Chatchat/peft/peft-HitoGpt`. Could you paste its content here?

Thanks. This does not look like an `adapter_config.json` from PEFT. [Here](https://huggingface.co/peft-internal-testing/gpt2-lora-random/blob/main/adapter_config.json) is an example of a correct PEFT config file. The file you show looks like a `config.json` from transformers...
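As a quick sanity check (using the path mentioned above), loading the config through PEFT will only succeed if the directory contains a valid `adapter_config.json`:

```python
from peft import PeftConfig

path = "/home/Baichuan2/Langchain-Chatchat/peft/peft-HitoGpt"
# raises an error if adapter_config.json is missing or is not a PEFT config
config = PeftConfig.from_pretrained(path)
print(config.peft_type, config.base_model_name_or_path)
```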

Why did you rename `config.json` to `adapter_config.json`? Those are not the same thing. Based on the file sizes you show, this is a full model, not a PEFT adapter. Try...
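For reference, a genuine PEFT adapter directory is produced by calling `save_pretrained` on the PEFT model, which writes `adapter_config.json` plus the (small) adapter weights. Roughly, with placeholder model name and paths:

```python
from peft import LoraConfig, get_peft_model, AutoPeftModelForCausalLM
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
lora_config = LoraConfig(task_type="CAUSAL_LM", target_modules=["q_proj", "v_proj"])
peft_model = get_peft_model(base, lora_config)

# ... fine-tuning would happen here ...

# writes adapter_config.json + adapter_model.safetensors (a few MB),
# not the multi-GB weight files of a full model
peft_model.save_pretrained("my-adapter")

# later: load base model + adapter in one step
model = AutoPeftModelForCausalLM.from_pretrained("my-adapter")
```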

I'm still withholding review to wait for:

> will go live in v4.41.0 and stay until 4.46.0 as detailed in the thread (6 months)

@muellerzr Has the time come now?