
`bfloat16` and FP16 support for custom kernels

Open sidnb13 opened this issue 1 year ago • 3 comments

🚀 The feature, motivation and pitch

As the kernels currently seem to be limited to the FP32 data type, it would be immensely helpful to have the implementations support mixed-precision computation (FP16 and BF16) as well. This would benefit a broader range of applications in NLP, not just graph neural nets.

How involved would enabling mixed-precision computation be? Any pointers for potentially starting a PR?
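
For concreteness, here is a sketch of the kind of usage this would enable, assuming the documented `pyg_lib.ops.segment_matmul(inputs, ptr, other)` signature. The shapes and the autocast setup are purely illustrative, and actual autocast participation would additionally require registering the op with the autocast dispatch key:

```python
import torch
import pyg_lib

# Illustrative shapes: two segments (5 and 3 rows) with one 16x32 weight matrix each.
inputs = torch.randn(8, 16, device='cuda')
ptr = torch.tensor([0, 5, 8], device='cuda')
other = torch.randn(2, 16, 32, device='cuda')

# Desired behavior: the grouped GEMM runs in reduced precision under autocast,
# just like torch.matmul does today.
with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
    out = pyg_lib.ops.segment_matmul(inputs, ptr, other)

print(out.shape)  # torch.Size([8, 32])
```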

Alternatives

No response

Additional context

No response

sidnb13 avatar Oct 31 '23 18:10 sidnb13

Adding on to @sidnb13's comments: it looks like segment_matmul just takes plain torch.Tensor arguments, and torch.Tensor does have native support for torch.bfloat16 / torch.float16 / torch.half. The odd thing is that when you run segment_matmul on tensors cast to bfloat16, you get this error:

[screenshot: error raised by segment_matmul on bfloat16 inputs]
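
A minimal reproduction along these lines (shapes chosen for illustration, not taken from the screenshot) hits the same dtype limitation:

```python
import torch
import pyg_lib

inputs = torch.randn(8, 16).to(torch.bfloat16)
ptr = torch.tensor([0, 5, 8])
other = torch.randn(2, 16, 32).to(torch.bfloat16)

# On builds where the kernel only dispatches the FP32 type, this call raises a
# runtime error about the bf16 dtype instead of returning a result.
out = pyg_lib.ops.segment_matmul(inputs, ptr, other)
```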

finndayton avatar Nov 02 '23 02:11 finndayton

@DamianSzwichtenberg

rusty1s avatar Nov 02 '23 08:11 rusty1s

(segment|grouped)_matmul had an incomplete dispatch type set. I've fixed the CPU implementation in pyg-lib @ 272 (@puririshi98, could you please take a look at the CUDA implementation?). If you find any other custom operation that lacks bf16 support, you can take a look at @yanbing-j's PRs, e.g. pytorch_scatter @ 316 and pytorch_scatter @ 375.
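
Until the bf16/fp16 dispatch lands everywhere (the CUDA path in particular), one user-side workaround, sketched below and not part of pyg-lib itself (the helper name is hypothetical), is to upcast around the call and cast the result back, trading extra memory traffic for compatibility:

```python
import torch
import pyg_lib

def segment_matmul_any_dtype(inputs: torch.Tensor, ptr: torch.Tensor,
                             other: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: run segment_matmul in FP32 and cast the result back.

    Only needed on builds where the kernel dispatches FP32 exclusively.
    """
    out_dtype = inputs.dtype
    out = pyg_lib.ops.segment_matmul(inputs.float(), ptr, other.float())
    return out.to(out_dtype)

inputs = torch.randn(8, 16, dtype=torch.bfloat16)
ptr = torch.tensor([0, 5, 8])
other = torch.randn(2, 16, 32, dtype=torch.bfloat16)
out = segment_matmul_any_dtype(inputs, ptr, other)  # bf16 result, FP32 compute
```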

DamianSzwichtenberg avatar Nov 02 '23 11:11 DamianSzwichtenberg