
[peft] align adapter output shape with wrapped module output shape

Open · guyueh1 opened this pull request 6 months ago • 2 comments

[!IMPORTANT]
The Update branch button must only be pressed on very rare occasions. An outdated branch never blocks the merge of a PR. Please reach out to the automation team before pressing that button.

What does this PR do ?

[peft] align adapter output shape with wrapped module output shape.

Collection: [Note which collection this PR will affect]

Changelog

  • Add specific, line-by-line info about the high-level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 
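
A minimal sketch of the pattern the PR title describes: reshape the adapter output to the wrapped module's output shape before summing. Module and variable names here are purely illustrative, not NeMo's actual adapter classes.

```python
import torch
import torch.nn as nn

# Illustrative toy wrapper, not NeMo's actual adapter implementation.
class ToyAdapterWrapper(nn.Module):
    def __init__(self, hidden=8, rank=2):
        super().__init__()
        self.wrapped = nn.Linear(hidden, hidden)           # the wrapped module
        self.lora_a = nn.Linear(hidden, rank, bias=False)  # adapter down-projection
        self.lora_b = nn.Linear(rank, hidden, bias=False)  # adapter up-projection

    def forward(self, x):
        linear_output = self.wrapped(x)
        adapter_output = self.lora_b(self.lora_a(x))
        # Align the adapter output with the wrapped module's output shape
        # before summing, so a shape mismatch cannot broadcast into a
        # much larger tensor.
        adapter_output = adapter_output.reshape(linear_output.shape)
        return linear_output + adapter_output

out = ToyAdapterWrapper()(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 8])
```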

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR. To re-run CI, remove and re-add the label. To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • [ ] Make sure you have read and followed the Contributor guidelines
  • [ ] Did you write any new necessary tests?
  • [ ] Did you add or update any necessary documentation?
  • [ ] Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex, etc.)
    • [ ] Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • [ ] New Feature
  • [ ] Bugfix
  • [ ] Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed. The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

guyueh1 · Jun 14 '25

@cuichenx This is a change to resolve an OOM issue in our testing. For now I only understand the symptom: linear_output returns a tensor of shape (batch*seq, 1, hidden), while adapter_output returns a tensor of shape (batch*seq, hidden). Because of this dimension mismatch, when the two are added torch broadcasts them and tries to allocate a (batch*seq, batch*seq, hidden) tensor, which causes the error. Why the linear_output tensor shape has changed to this, I haven't figured out yet.
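
A minimal reproduction of the broadcasting behavior described above (sizes are illustrative; real batch*seq would be much larger):

```python
import torch

n, hidden = 4, 8  # stand-ins for batch*seq and hidden

linear_output = torch.randn(n, 1, hidden)   # (batch*seq, 1, hidden)
adapter_output = torch.randn(n, hidden)     # (batch*seq, hidden)

# Broadcasting aligns trailing dimensions, so the sum silently
# materializes a (batch*seq, batch*seq, hidden) tensor.
print((linear_output + adapter_output).shape)  # torch.Size([4, 4, 8])

# Reshaping the adapter output to match linear_output avoids the blow-up.
print((linear_output + adapter_output.reshape(linear_output.shape)).shape)
# torch.Size([4, 1, 8])
```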

guyueh1 · Jun 15 '25

I think adding (batch*seq, 1, hidden) and (batch*seq, hidden) won't just OOM -- it won't add at all, right? I'm not sure how you got there. When I tested just now, both tensors had shape (seq, batch, hidden).

cuichenx · Jun 16 '25