FBGEMM
Fix meta function for merge_pooled_embeddings
Summary: The meta function for merge_pooled_embeddings should carry device information (for FakeTensor propagation).
BEFORE this fix, dynamo export fails with:

```
File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/5408a275d581d7c2/scripts/ads_pt2_inference/__pt2_cli__/pt2_cli#link-tree/torch/_subclasses/fake_tensor.py", line 1264, in merge_devices
    raise RuntimeError(
torch._dynamo.exc.TorchRuntimeError: Failed running call_function fbgemm.permute_pooled_embs_auto_grad(*(FakeTensor(..., device='meta', size=(10, 5124)), FakeTensor(..., device='cuda:0', size=(40,), dtype=torch.int64), FakeTensor(..., device='cuda:0', size=(39,), dtype=torch.int64), FakeTensor(..., device='cuda:0', size=(40,), dtype=torch.int64), FakeTensor(..., device='cuda:0', size=(39,), dtype=torch.int64)), **{}):
Unhandled FakeTensor Device Propagation for fbgemm.permute_pooled_embs.default, found two different devices meta, cuda:0
```
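To illustrate the failure mode the traceback describes, here is a minimal toy sketch (no PyTorch dependency; all class and function names are hypothetical, not FBGEMM's actual implementation): a meta/fake rule that hard-codes its output onto the `meta` device breaks fake-tensor device propagation as soon as a downstream op also sees real `cuda:0` inputs, whereas a rule that inherits the input's device keeps propagation consistent.

```python
from dataclasses import dataclass

@dataclass
class FakeTensor:
    """Toy stand-in for torch._subclasses.FakeTensor: shape plus a device tag."""
    shape: tuple
    device: str

def merge_devices(tensors):
    # Mimics fake-tensor device propagation: all operands must agree on a device.
    devices = {t.device for t in tensors}
    if len(devices) > 1:
        raise RuntimeError(
            f"found two different devices {', '.join(sorted(devices))}"
        )
    return devices.pop()

# BROKEN meta rule (the pre-fix behavior): output is pinned to device="meta".
def permute_meta_broken(pooled):
    return FakeTensor(pooled.shape, device="meta")

# FIXED meta rule: output inherits the input tensor's device.
def permute_meta_fixed(pooled):
    return FakeTensor(pooled.shape, device=pooled.device)

inp = FakeTensor((10, 5124), device="cuda:0")
offsets = FakeTensor((40,), device="cuda:0")

# With the broken rule, the next op sees meta alongside cuda:0 and fails:
try:
    merge_devices([permute_meta_broken(inp), offsets])
except RuntimeError as e:
    print(e)  # found two different devices cuda:0, meta

# With the fix, every operand reports cuda:0 and propagation succeeds:
print(merge_devices([permute_meta_fixed(inp), offsets]))  # cuda:0
```

In real PyTorch code the fix lives in the operator's registered meta kernel rather than a free function, but the invariant is the same: the fake output must report the device the real output would have.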
Reviewed By: ezyang
Differential Revision: D51121648
Privacy Context Container: L1138451
Deploy Preview for pytorch-fbgemm-docs canceled.
| Name | Link |
|---|---|
| Latest commit | 7792c1e48fb51200aea5739da858d19ec92cc6ba |
| Latest deploy log | https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/654bd77ad7224c0008442ffc |
This pull request was exported from Phabricator. Differential Revision: D51121648