CUDA & CPU: support F32 kernel type for `CONV_TRANSPOSE_2D`
Also updates the test case in test-backend-ops. Since the F32 kernel type is not supported on the CPU backend, only `GGML_TYPE_F16` is kept for now; `GGML_TYPE_F32` can be uncommented back in the future.
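For illustration, the updated registration could look something like this (a sketch only; the `test_conv_transpose_2d` parameters shown, including the trailing kernel-type argument, are placeholders rather than the actual test-backend-ops signature):

```cpp
// Illustrative sketch, inside test-backend-ops' eval case list: loop over
// kernel types so GGML_TYPE_F32 can later be re-enabled by uncommenting it.
// The test_conv_transpose_2d arguments shown are placeholders.
for (ggml_type kernel_type : { GGML_TYPE_F16 /*, GGML_TYPE_F32 */ }) {
    test_cases.emplace_back(new test_conv_transpose_2d(
        { 3, 2, 3, 1 },  // input ne
        { 2, 2, 1, 3 },  // kernel ne
        1,               // stride
        kernel_type));
}
```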
Does this PR make a difference to something? From what I understand, the kernel value is upcast into float before doing any accumulation (and accumulation is in f32 anyway). So unless there are kernels around that don't fit into f16, I don't see a benefit to supporting this, especially when we don't support f16 inputs yet (which, incidentally, might be more relevant than kernels being f32, since we could potentially do `half2` multiplications).
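For reference, the pattern being described (each f16 kernel value widened to f32 before the multiply-add, with an f32 accumulator) looks like the simplified scalar sketch below; `dot_f16_kernel` is a made-up helper for illustration, not code from the PR:

```cpp
#include "ggml.h" // for ggml_fp16_t and ggml_fp16_to_fp32

// Simplified scalar sketch: each f16 kernel value is upcast to f32 before
// the multiply-add, so no intermediate result is ever stored in f16.
static float dot_f16_kernel(const ggml_fp16_t * kernel, const float * src, int n) {
    float acc = 0.0f; // accumulation happens in f32
    for (int i = 0; i < n; ++i) {
        acc += ggml_fp16_to_fp32(kernel[i]) * src[i];
    }
    return acc;
}
```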
So the motivations of this PR are:
- Currently, `ggml_backend_cuda_device_supports_op` always returns `true` for `GGML_OP_CONV_TRANSPOSE_2D` without checking the kernel type, which can cause crashes when the op is actually computed. This PR fixes that mismatch (see the sketch after this list): https://github.com/ggml-org/llama.cpp/blob/8e878f0cb4c893de23455dd0a6bfbbb21bcaad89/ggml/src/ggml-cuda/ggml-cuda.cu#L4061-L4064
- Some recent models are natively BF16, and using an F16 kernel can lead to overflows. F32 is safe here and can readily be used for precision verification.
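A type-checked version of the linked case could look roughly like this (a sketch of the shape of the fix, not necessarily the exact code in this PR):

```cpp
// Inside ggml_backend_cuda_device_supports_op: gate on the kernel (src0)
// type instead of returning true unconditionally.
case GGML_OP_CONV_TRANSPOSE_2D:
    return op->src[0]->type == GGML_TYPE_F16 ||
           op->src[0]->type == GGML_TYPE_F32;
```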
> Currently, `ggml_backend_cuda_device_supports_op` always returns `true` for `GGML_OP_CONV_TRANSPOSE_2D` without checking the kernel type, which can cause crashes when the op is actually computed.

That's because it matches the CPU capabilities exactly.

> Some recent models are natively BF16, and using an F16 kernel can lead to overflows. F32 is safe here and can readily be used for precision verification.
That would be a problem in a conversion to GGUF, not necessarily a problem to be solved here.
You should add the CPU version for the f32 kernel too; that way this PR makes more sense.
Hi @am17an, thanks for reviewing this PR.
Here’s what has been updated:
- Simplified CUDA kernel dispatch logic.
- Renamed `type_kernel` to `kernel_type`.
- Introduced a templated `ggml_compute_forward_conv_2d_transpose_impl` to reduce duplication (rough sketch below).
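Roughly, the deduplication follows the usual pattern of a type-templated impl plus a thin dispatcher (a sketch only: the function names match the list above, but the dispatch shape is an assumption and the loop body is omitted, so this is not the exact merged code):

```cpp
// Sketch of the deduplication pattern: one impl templated on the kernel
// element type, plus a small dispatcher on src0->type. Only the dispatch
// shape is shown; the shared convolution loops are elided.
template <typename kernel_t>
static void ggml_compute_forward_conv_2d_transpose_impl(
        const ggml_compute_params * params, ggml_tensor * dst) {
    // ... shared conv-transpose-2d loops; each kernel element is widened to
    // f32 (via ggml_fp16_to_fp32 for ggml_fp16_t, or used directly for float) ...
    GGML_UNUSED(params);
    GGML_UNUSED(dst);
}

static void ggml_compute_forward_conv_transpose_2d(
        const ggml_compute_params * params, ggml_tensor * dst) {
    switch (dst->src[0]->type) {
        case GGML_TYPE_F16: ggml_compute_forward_conv_2d_transpose_impl<ggml_fp16_t>(params, dst); break;
        case GGML_TYPE_F32: ggml_compute_forward_conv_2d_transpose_impl<float>(params, dst);       break;
        default:            GGML_ABORT("%s: unsupported kernel type", __func__);
    }
}
```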
Please let me know if there’s anything else you’d like changed.