
fixing multiple LoRA in the same batch or vit

Open saeid93 opened this issue 1 year ago • 4 comments

@BenjaminBossan This is the initial fix for the batched LoRA inference problem explained in #1960. For now, it only supports the ViT model; it is meant as an example and as a template for gradually adding support for other models. Based on the solution we discussed in https://github.com/huggingface/peft/issues/1960#issuecomment-2252898911, there are generally two options, since we need to change model-specific details:

  1. Change the signature of the model forward functions in the transformers library. The problem with this approach is that it puts PEFT-specific logic into transformers, which I'm not sure is the best fit for a general-purpose library.
  2. Change the forward function in PEFT and patch it dynamically when multiple LoRA requests are present in the inference batch.

I'm taking the second approach, but each model needs different changes. Also, the generate functions of generative models would need to be adapted. I'm happy to go through the models one by one and also fix #1967, but it's better to review this first and then decide whether we want to go down the route of dynamically patching the forward functions or fixing it in the transformers library.
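For illustration, the second option (patching `forward` at runtime) could look roughly like the sketch below. This is a minimal, hypothetical example in plain PyTorch, not PEFT's actual implementation — `patch_forward`, `adapter_names`, and `_last_adapter_names` are illustrative names. A wrapper replaces a module's `forward` so that it accepts an extra `adapter_names` keyword argument and strips it before calling the original `forward`:

```python
import torch
from torch import nn


def patch_forward(module: nn.Module) -> None:
    """Dynamically wrap ``module.forward`` so it tolerates an extra
    ``adapter_names`` keyword argument (illustrative only)."""
    original_forward = module.forward

    def wrapped_forward(*args, adapter_names=None, **kwargs):
        # In a real implementation, adapter_names would be routed to the
        # LoRA layers; here we just record it and call the original forward.
        module._last_adapter_names = adapter_names
        return original_forward(*args, **kwargs)

    module.forward = wrapped_forward


layer = nn.Linear(4, 2)
patch_forward(layer)
# The caller can now pass per-sample adapter names without the base
# module's signature having been changed upstream.
out = layer(torch.ones(3, 4), adapter_names=["adapter_a", "adapter_b", "__base__"])
print(out.shape)  # torch.Size([3, 2])
```

The downside discussed above is that the wrapping logic is model-specific, since each architecture forwards its arguments differently.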

saeid93 avatar Aug 05 '24 14:08 saeid93

No problem! Glad to be of any help. Sorry, I couldn't find any comments on the pull request. Do you mean this comment on the MobileViT issue? If so, that problem is different from this one: this PR solves the issue for multiple LoRA adapters, while the other one is a MobileViT-specific problem.

saeid93 avatar Aug 06 '24 12:08 saeid93

> Sorry, I couldn't find any comments on the pull request.

Wow, it's gone! No idea what happened, I did write it for sure...

Okay, so a second time. I was referring to these lines:

https://github.com/huggingface/peft/pull/1990/files#diff-b700510ad2034b549511a969d85f89f9094243a7f3c740e311dc1eb83ace9a79R57-R61

Are those the only real changes to the forward function that are required? If yes, would it be possible to instead register a pre-forward hook for the classifier to inject the argument? This could easily be achieved here:

https://github.com/huggingface/peft/blob/4611034ff85e26e1f9647ea1f505e9e50322ff0f/src/peft/tuners/lora/model.py#L434-L438

But maybe I'm missing something and more changes are necessary, or will be in the future for the issue. WDYT?
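The pre-forward-hook idea can be sketched as follows: a hook registered with `register_forward_pre_hook(..., with_kwargs=True)` may modify the keyword arguments before `forward` runs, so an `adapter_names` value captured elsewhere can be injected without touching the `forward` signature. This is a standalone illustration, not the code that ended up in PEFT; the names `Classifier`, `inject_adapter_names`, and `current_adapter_names` are placeholders:

```python
import torch
from torch import nn

# Hypothetical state that the surrounding code would set per batch.
current_adapter_names = ["adapter_a", "__base__"]


class Classifier(nn.Module):
    """Toy stand-in for a classifier head that accepts an optional
    ``adapter_names`` keyword argument."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)
        self.seen_adapter_names = None

    def forward(self, x, adapter_names=None):
        self.seen_adapter_names = adapter_names
        return self.linear(x)


def inject_adapter_names(module, args, kwargs):
    # A pre-forward hook registered with with_kwargs=True may return an
    # (args, kwargs) tuple that replaces the call's arguments.
    kwargs["adapter_names"] = current_adapter_names
    return args, kwargs


classifier = Classifier()
classifier.register_forward_pre_hook(inject_adapter_names, with_kwargs=True)

out = classifier(torch.ones(2, 4))  # adapter_names is injected by the hook
```

The advantage of this design is that the injection lives entirely in PEFT's adapter-setup code, so no model-specific `forward` needs to be rewritten.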

BenjaminBossan avatar Aug 06 '24 13:08 BenjaminBossan

Yes, your suggestion worked perfectly! Please check the new commit.

saeid93 avatar Aug 06 '24 17:08 saeid93

Sure, I'll be happy to add the tests. I'll add the updates when I get a chance.

saeid93 avatar Aug 07 '24 17:08 saeid93

@saeid93 Do you still plan on working on this?

BenjaminBossan avatar Aug 20 '24 09:08 BenjaminBossan

> @saeid93 Do you still plan on working on this?

Yes, sorry, I had some busy weeks. I'll work on it this weekend if there isn't a strict deadline.

saeid93 avatar Aug 20 '24 09:08 saeid93

> Yes, sorry, I had some busy weeks. I'll work on it this weekend if there isn't a strict deadline.

Thanks. No worries about the time, I just wanted to ensure that you're still on it. If not, that's also okay, just let me know.

BenjaminBossan avatar Aug 20 '24 10:08 BenjaminBossan

@BenjaminBossan I added the tests and also changed the code according to your comments; please let me know if further changes are required. About the docs, I wasn't sure whether anything still needs to be added, since I don't see any caveat regarding the use of `modules_to_save` and the assumption is that `modules_to_save` is supported out of the box. But please let me know if you still think the docs should be updated.

saeid93 avatar Aug 24 '24 18:08 saeid93

@saeid93 LMK when this is ready for review.

BenjaminBossan avatar Sep 02 '24 09:09 BenjaminBossan

Gentle ping @saeid93

BenjaminBossan avatar Sep 12 '24 16:09 BenjaminBossan

@BenjaminBossan sorry for the delay, I was on annual leave. I made the changes you asked for; please let me know if you need any more changes.

saeid93 avatar Sep 15 '24 12:09 saeid93

@BenjaminBossan sure, glad to be of any help. I think all the comments have now been addressed. I ran `make style` every time; it was probably a version mismatch. Hopefully it will go through this time. Let me know if further changes are needed.

saeid93 avatar Sep 16 '24 18:09 saeid93

Thanks for the latest changes @saeid93. The style check is still failing, did you successfully run `make style`? If that doesn't work for some reason, removing this import should be sufficient.

BenjaminBossan avatar Sep 17 '24 09:09 BenjaminBossan

@BenjaminBossan no problem! I did run `make style`; I'm not sure why it wasn't caught locally. I just removed the line. Hopefully it will go through this time.

saeid93 avatar Sep 17 '24 09:09 saeid93

The linter is happy now :tada:. However, we now get an error on Python 3.8 because it does not support `list[str]` etc. Could you please add the `from __future__ import annotations` import at the top of `utils/other.py`? That should fix it.

BenjaminBossan avatar Sep 17 '24 10:09 BenjaminBossan

Done 👍 I had to make a small change and use `tuple` instead of `Tuple` to avoid ruff complaints after adding `from __future__ import annotations`.
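For context, a minimal illustration of why the future import helps: with `from __future__ import annotations` (PEP 563), annotations are stored as strings and never evaluated at definition time, so built-in generics like `list[str]` and `tuple[str, ...]` work even on Python 3.8, where subscripting `list` directly would otherwise raise a `TypeError`. The function below is a hypothetical example, not code from the PR:

```python
from __future__ import annotations


def first_names(names: list[str]) -> tuple[str, ...]:
    """Return the input names as a tuple (illustrative helper)."""
    return tuple(names)


# The annotations are kept as plain strings, so list[str] is never
# evaluated at runtime -- this is what makes the syntax safe on 3.8.
print(first_names.__annotations__["names"])  # prints: list[str]
```

This is also why ruff asks for the lowercase builtins: once the future import is in place, `typing.Tuple` is redundant and `tuple` is preferred.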

saeid93 avatar Sep 17 '24 11:09 saeid93