pytorch_cluster
Still there: Incompatibility with bfloat16
An issue about the lack of bfloat16 support was filed previously; it looks like the problem is still there. I am getting the same error, RuntimeError: "_" not implemented for 'BFloat16', when running the code below:
from torch_cluster import radius, radius_graph
import torch
from torch import tensor

# bfloat16 points on CUDA trigger the error.
x = tensor([[ 4.0625, -27.3750, -4.3438],
            [ 3.0312, -27.6250, -3.4844],
            [ 5.0312, -28.5000, -4.2812],
            [-8.1875, -17.7500, -2.9062],
            [-8.1875, -19.0000, -2.2812],
            [-8.0625, -20.2500, -2.9688]], device='cuda:0', dtype=torch.bfloat16)

radius_inp = (
    x,
    5.0,
    tensor([0, 0, 0, 0, 0, 0], device='cuda:0'),
    10,
)

# Raises: RuntimeError: "_" not implemented for 'BFloat16'
radius_edges = radius_graph(*radius_inp)
The same error occurs with the radius() call. This is with the package built from trunk, on an A6000 box.
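For reference, the analogous radius() call fails the same way (a sketch, assuming the same x and a batch vector as above):

# Hypothetical repro for radius(); assumes the same x and batch as above.
batch = torch.tensor([0, 0, 0, 0, 0, 0], device='cuda:0')

# Raises the same RuntimeError for bfloat16 inputs on CUDA.
assign_index = radius(x, x, 5.0, batch, batch, max_num_neighbors=10)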
Currently, bfloat16 support only exists on CPU :(
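For example, the same search runs fine when the bfloat16 tensors live on CPU (a minimal sketch, assuming the same x as above):

# Sketch: the same call succeeds on CPU, where bfloat16 kernels are implemented.
x_cpu = x.to('cpu')
batch_cpu = torch.tensor([0, 0, 0, 0, 0, 0])
edges_cpu = radius_graph(x_cpu, 5.0, batch_cpu, max_num_neighbors=10)  # no error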
Any plans to include bfloat16 support on GPU soon?
Currently no, since this repo is no longer in active development.
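In the meantime, a possible workaround is to run the neighbor search in float32 on the GPU; the returned edge_index is an integer tensor, so it can be used with the original bfloat16 features unchanged (a sketch, assuming float32 rounding at the radius boundary is acceptable):

# Sketch of a workaround: cast to float32 just for the neighbor search.
batch = torch.tensor([0, 0, 0, 0, 0, 0], device='cuda:0')
edge_index = radius_graph(x.float(), 5.0, batch, max_num_neighbors=10)
# edge_index is a LongTensor and works with the bfloat16 x directly.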
This issue has had no activity for 6 months. It will be closed in 2 weeks unless there is new activity. Is this issue already resolved?