ROCm backend sync
A brief description of this PR:
- fix kernel profiler on rocm
- bert embedding (embedding + layernorm) fusion support on rocm (see the sketch after this list)
- conv2d_bias_add fusion support on rocm
- bmm_*_add fusion support on rocm
- support stride in rocm bmm/layernorm by modifying the tensor accessor
- gemm + bias + fast-gelu (hardswitch) support on rocm (reference computation sketched below)
- update rocm Dockerfile to 5.3
- update CK version with various codegen modifications
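For context on the embedding fusion: BERT's embedding stage sums word, position, and token-type embedding lookups and then applies a LayerNorm, and it is that embedding + layernorm chain that the ROCm backend can now fuse into a single kernel. Below is a minimal PyTorch sketch of the unfused reference pattern; it is illustrative only, and the class and parameter names are hypothetical rather than this repo's API:

```python
import torch
import torch.nn as nn

class BertEmbedding(nn.Module):
    """Unfused reference pattern: three embedding lookups summed,
    then LayerNorm. This chain is the fusion target on ROCm."""

    def __init__(self, vocab_size, max_pos, type_vocab_size, hidden, eps=1e-12):
        super().__init__()
        self.word = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_pos, hidden)
        self.token_type = nn.Embedding(type_vocab_size, hidden)
        self.ln = nn.LayerNorm(hidden, eps=eps)

    def forward(self, input_ids, token_type_ids):
        seq_len = input_ids.shape[1]
        # Position ids broadcast over the batch dimension.
        pos_ids = torch.arange(seq_len, device=input_ids.device).unsqueeze(0)
        # embedding + embedding + embedding -> layernorm: the fused pattern.
        x = self.word(input_ids) + self.pos(pos_ids) + self.token_type(token_type_ids)
        return self.ln(x)
```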
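Likewise, the gemm + bias + fast-gelu item fuses the activation into the GEMM epilogue; fast-gelu commonly refers to the tanh approximation of GELU. A hedged reference sketch in plain PyTorch (the function name is hypothetical):

```python
import math
import torch

def gemm_bias_fast_gelu(x, w, b):
    """Reference for the fused epilogue: y = fast_gelu(x @ w^T + b),
    where fast_gelu is the tanh approximation of GELU."""
    y = x @ w.t() + b
    return 0.5 * y * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (y + 0.044715 * y.pow(3))))
```

Either reference can serve to numerically check the fused ROCm kernels against eager-mode outputs.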
Thanks @fsx950223 for your fix and for adding the AMD CI! For some reason the CircleCI pipeline fails; I will work on manually merging the PR into our internal repo and running some tests.
Also, it doesn't seem like the AMD CI was triggered. Has it been enabled successfully? @fsx950223
@Yanxing-Shi Could you take a look at the failure related to ops.size?
@ipiszy @chenyang78 Is it time to enable the ROCm CI? It seems that you need to approve it.