`02_pytorch_extension_grouped_gemm.ipynb`: No kernel configuration found for supported data type and layout combination (DataType.bf16)
Describe the bug
I followed 02_pytorch_extension_grouped_gemm.ipynb and changed the dtype from torch.float16 to torch.bfloat16:
import cutlass
import torch
dtype = torch.bfloat16
plan = cutlass.op.GroupedGemm(element=dtype, layout=cutlass.LayoutType.RowMajor)
op = plan.construct()
grouped_gemm = cutlass.emit.pytorch(op, name='grouped_gemm', cc=plan.cc, sourcedir='out', jit=True)
This raises the following error:
File "/opt/conda/lib/python3.10/site-packages/cutlass/op/gemm.py", line 300, in _reset_operations
raise Exception(f'No kernel configuration found for supported data type and layout '
Exception: No kernel configuration found for supported data type and layout combination (<DataType.bf16: 16>, <DataType.bf16: 16>, <DataType.bf16: 16>)x(<LayoutType.RowMajor: 2>, <LayoutType.RowMajor: 2>)
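For comparison, the unmodified notebook flow with torch.float16 appears to go through in the same environment, which suggests the failure is specific to the bf16 kernel lookup (a minimal sanity-check sketch; 'grouped_gemm_fp16' is just an illustrative extension name):
import cutlass
import torch
# Same flow as the notebook with its original dtype; this is expected to find
# a kernel configuration and JIT-build the PyTorch extension.
dtype = torch.float16
plan = cutlass.op.GroupedGemm(element=dtype, layout=cutlass.LayoutType.RowMajor)
op = plan.construct()
grouped_gemm = cutlass.emit.pytorch(op, name='grouped_gemm_fp16', cc=plan.cc, sourcedir='out', jit=True)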
Is torch.bfloat16 not supported?
Why is torch.bfloat16 not included in _CUTLASS_TYPE_TO_TORCH_TYPE?
_CUTLASS_TYPE_TO_TORCH_TYPE = {
DataType.f16: "torch::kF16",
DataType.f32: "torch::kF32",
DataType.f64: "torch::kF64",
DataType.s8: "torch::I8",
DataType.s32: "torch::I32",
}
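Even if the kernel lookup supported bf16 for this layout, the PyTorch emitter would still need a bf16 entry in this map. Below is an untested sketch of patching it from user code; it assumes the map quoted above lives in the cutlass.emit.pytorch module and that torch::kBF16 is the matching libtorch identifier, and it would not by itself fix the "No kernel configuration found" exception, which is raised earlier when the GroupedGemm plan is constructed:
import importlib
import cutlass
# Untested workaround sketch: add a bf16 entry to the emitter's dtype map
# before calling cutlass.emit.pytorch. Assumes the map is defined in the
# module "cutlass.emit.pytorch" and that torch::kBF16 is the correct
# libtorch enum string for torch.bfloat16.
pytorch_emit = importlib.import_module("cutlass.emit.pytorch")
pytorch_emit._CUTLASS_TYPE_TO_TORCH_TYPE[cutlass.DataType.bf16] = "torch::kBF16"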
Please see https://github.com/NVIDIA/cutlass/issues/1736#issuecomment-2305319679