FBGEMM
[comm][ROCm] move memory copy into one_shot_all_reduce
Avoids the latency of launching a separate hipMemcpyAsync by performing the memory copy inside the one_shot_all_reduce kernel. Benchmarking shows a 3-4 us latency reduction, and end-to-end testing also shows improvements.
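The idea can be sketched as follows: instead of launching a separate hipMemcpyAsync to stage the local input into the communication buffer before the all-reduce kernel runs, the kernel itself performs that copy as part of its work, eliminating one kernel/copy launch per call. Below is a minimal, serial CPU-side sketch of the fused pattern; all names are hypothetical and this is not the actual FBGEMM HIP kernel, which runs per-thread on the GPU with proper cross-rank synchronization.

```cpp
#include <cstddef>
#include <vector>

// One slot per rank in a shared communication buffer.
struct CommBuffer {
  std::vector<std::vector<float>> slots;
};

// Fused one-shot all-reduce sketch: the staging copy of the local input
// into this rank's slot happens inside the "kernel" itself, so no
// separate hipMemcpyAsync launch (and its launch latency) is needed.
// The reduce sums element-wise across all rank slots.
void one_shot_all_reduce_fused(int rank, const std::vector<float>& input,
                               CommBuffer& buf, std::vector<float>& out) {
  buf.slots[rank] = input;  // copy fused into the kernel, not a memcpy call
  for (size_t i = 0; i < out.size(); ++i) {
    float acc = 0.0f;
    for (const auto& slot : buf.slots) {
      acc += slot[i];
    }
    out[i] = acc;
  }
}
```

In the real kernel the copy and the reduction are done by GPU threads over the symmetric-memory buffers, with a barrier between staging and reduction; the sketch only illustrates why the separate copy launch becomes unnecessary.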
Hi @wenkaidu!
Thank you for your pull request and welcome to our community.
Action Required
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.
Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.
Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.
If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Deploy Preview for pytorch-fbgemm-docs ready!
| Name | Link |
|---|---|
| Latest commit | 0ff414b2c97df7dd10f6f3022a639711550a270f |
| Latest deploy log | https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/6668f968bf785e0008295e19 |
| Deploy Preview | https://deploy-preview-2693--pytorch-fbgemm-docs.netlify.app |
@xw285cornell has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks @wenkaidu ! Just wondering, what's this diff's impact on CUDA/H100? will that be generally beneficial?
I've heard that cudaMemcpy is more performant than copying from a kernel, but I'm not sure which method is faster at small data sizes. I don't have access to an H100 hardware and software setup; if someone has one, please run a quick check.
@wenkaidu sorry, I had to create another PR #2770 to fix some formatting issues (and I cannot re-export this PR). Thanks again for your contribution!