Implement some custom fb op out variant kernels
Summary: Implement out variant versions of tbe_input_combine_with_length, offsets_to_lengths, and lengths_to_offsets, and add a skeleton for custom fb op static kernel dispatch in sigmoid.
Also start adding native kernels (i.e. kernels with no out variant, for which we can call the native implementation directly rather than first going through the torch dispatcher).
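For context, the two conversion kernels are inverses of each other: offsets are the exclusive prefix sum of lengths. Below is a minimal Python sketch of what an "out variant" of each looks like; the function names, the plain-list types, and the out-parameter convention are illustrative assumptions, not FBGEMM's actual signatures.

```python
def lengths_to_offsets_out(lengths, out):
    # Out variant: write the exclusive prefix sum of `lengths` into the
    # caller-provided buffer `out` (len(out) == len(lengths) + 1), so the
    # kernel itself performs no allocation.
    assert len(out) == len(lengths) + 1
    out[0] = 0
    for i, n in enumerate(lengths):
        out[i + 1] = out[i] + n
    return out


def offsets_to_lengths_out(offsets, out):
    # Inverse direction: lengths[i] = offsets[i + 1] - offsets[i].
    assert len(out) == len(offsets) - 1
    for i in range(len(out)):
        out[i] = offsets[i + 1] - offsets[i]
    return out
```

For example, lengths `[2, 0, 3]` map to offsets `[0, 2, 2, 5]`, and converting those offsets back recovers the original lengths. Writing into a preallocated `out` buffer is what makes these kernels usable from a static dispatch path that manages its own memory.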
Reviewed By: henryoier
Differential Revision: D57453462
This pull request was exported from Phabricator. Differential Revision: D57453462
Deploy Preview for pytorch-fbgemm-docs ready!
| Name | Link |
|---|---|
| Latest commit | 4e2180b5321060225c40155ff4a46b297a966ec4 |
| Latest deploy log | https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/668ffbc1343a910007222651 |
| Deploy Preview | https://deploy-preview-2793--pytorch-fbgemm-docs.netlify.app |
| Deploy Preview | https://deploy-preview-2793--pytorch-fbgemm-docs.netlify.app |
This pull request has been merged in pytorch/FBGEMM@2bd3222ee605aa8cdfa5e459d24ec01c1e257fdd.