lorax
feat: support lazy loading the lora module for reducing the loading p…
What does this PR do?
Fixes #433
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Was this discussed/approved via a Github issue or the discord / slack channel? Please add a link to it if that's the case.
- [ ] Did you write any new necessary tests?
Who can review?
@tgaddair
It seems that caching the handle from safe_open might be a better solution, but we would need to consider reference management for file handles shared across multiple layers. I will refine it later.
I still cache the filenames instead of file handles, since 1) safe_open needs the device info, which differs between loads of the lora modules, and 2) safe_open is lazy and does not read a tensor until get_tensor is invoked, which is already the optimized behavior for our case.
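The caching trade-off described above can be sketched as follows. This is a hypothetical illustration of the pattern, not lorax's actual code: `fake_safe_open` and `AdapterSource` are made-up stand-ins so the example is self-contained, while the real call would be `safetensors.safe_open(filename, framework="pt", device=device)`.

```python
# Hypothetical sketch of the design discussed above: cache only the
# filenames, and open each file lazily once the target device is known.
# The names below (FakeHandle, fake_safe_open, AdapterSource) are
# illustrative stand-ins, not lorax's or safetensors' real API.

class FakeHandle:
    """Stand-in for a safetensors handle: tensor data is read lazily."""

    def __init__(self, tensors, device):
        self._tensors = tensors
        self.device = device
        self.reads = 0  # counts actual tensor reads

    def get_tensor(self, name):
        self.reads += 1  # data is only materialized here, not at open time
        return self._tensors[name]


FILES = {"adapter.safetensors": {"lora_A": [1, 2], "lora_B": [3, 4]}}


def fake_safe_open(filename, device):
    # Opening is cheap: no tensor data is read at this point.
    return FakeHandle(FILES[filename], device)


class AdapterSource:
    def __init__(self, filenames):
        # Only filenames are cached: the target device differs between
        # loads, so handles cannot be cached without extra bookkeeping.
        self.filenames = list(filenames)

    def load_tensor(self, filename, name, device):
        handle = fake_safe_open(filename, device)  # device-specific open
        return handle.get_tensor(name)


src = AdapterSource(["adapter.safetensors"])
print(src.load_tensor("adapter.safetensors", "lora_A", device="cpu"))
print(src.load_tensor("adapter.safetensors", "lora_B", device="cuda:0"))
```

The point of the sketch is that the expensive step is get_tensor, not the open, so re-opening per load costs little while keeping the device handling simple.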
@tgaddair could you help review this change?
Looks great @thincal, thanks for the PR, and apologies for the slow review!
I had one question about the file handle, but happy to land this and iterate on it to see if there's any room to further optimize.
It is fine to land it first, since safe_open already behaves lazily and the main overhead is reading out the specific tensor.
@thincal I noticed there's a failing test:
FAILED server/tests/adapters/test_medusa.py::test_batched_medusa_weights - safetensors_rust.SafetensorError: device cpu is invalid
Would you be able to take a look before we merge? We should be good to go once that's resolved.
OK, I will finish it today, thanks.
@tgaddair the fix passes for server/tests/adapters/test_medusa.py::test_batched_medusa_weights. The remaining errors seem related to a repo access failure, could you take a look?
@tgaddair ping, sorry for my late response. Could you help review the revised commit? Thanks.
Hey @thincal, very sorry for the delay here. Let me take a look now.
@tgaddair I'm glad to see this change merged, and thanks for your support :)