Why must the linear input for `Layer.pack` be of type `torch.half`?
It seems that if we remove the assert in `Layer.pack`, we can pack a bf16 linear?
By the way, will Marlin support int4 × bf16 as input?
Hi, Marlin currently does not support BF16 inputs (though in many cases you can just convert your BF16 model to FP16). BF16 requires slightly different GPU instructions as well as a slightly different dequantization process (since BF16 has only 7 mantissa bits, compared to FP16's 10). This is also why `Layer.pack` has a corresponding assert.
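To illustrate why the dequantization path differs between the two formats: FP16 is laid out as 1 sign / 5 exponent / 10 mantissa bits, while BF16 is 1 sign / 8 exponent / 7 mantissa bits (effectively the top 16 bits of an FP32). A minimal pure-Python sketch decoding both layouts (no relation to Marlin's actual kernel code, which fuses this into GPU bit tricks):

```python
import struct

def fp16_bits_to_float(bits: int) -> float:
    # FP16: 1 sign bit, 5 exponent bits, 10 mantissa bits
    sign = (bits >> 15) & 0x1
    exp = (bits >> 10) & 0x1F
    frac = bits & 0x3FF
    if exp == 0:                      # subnormal
        val = (frac / 2**10) * 2**-14
    elif exp == 0x1F:                 # inf / nan
        val = float("inf") if frac == 0 else float("nan")
    else:                             # normal: implicit leading 1, bias 15
        val = (1 + frac / 2**10) * 2 ** (exp - 15)
    return -val if sign else val

def bf16_bits_to_float(bits: int) -> float:
    # BF16: 1 sign, 8 exponent, 7 mantissa -- the upper 16 bits of an FP32,
    # so widening to FP32 is just a 16-bit left shift.
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

# The same value 1.0 has different bit patterns in each format:
assert fp16_bits_to_float(0x3C00) == 1.0
assert bf16_bits_to_float(0x3F80) == 1.0
```

Because the exponent bias and mantissa width differ, the "magic constant" style tricks a fused int4 dequantizer relies on have to be reworked per format, not just reused.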
Hi, I notice that the core mma function for bf16 is already supported by vLLM's GPTQ Marlin kernel. It seems only a few changes are needed for this feature. https://github.com/vllm-project/vllm/blob/main/csrc/quantization/gptq_marlin/gptq_marlin.cu#L89
I really need bf16 inputs (most models are released in bf16 now). If it doesn't take much time, could you merge that feature into the Marlin repo? If not, I can do it later.
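In the meantime, the workaround mentioned above is to cast the model to FP16 before packing. A minimal sketch, assuming the layers to be packed are standard `torch.nn.Linear` modules (the `Layer.pack` call itself is omitted here); note that FP16's narrower exponent range can overflow for values above ~65504, though trained weights are usually far below that:

```python
import torch

# Hypothetical bf16 linear layer standing in for a layer from a real model.
linear = torch.nn.Linear(8, 8).to(torch.bfloat16)

# Layer.pack asserts torch.half, so cast the module to FP16 first.
linear = linear.half()
assert linear.weight.dtype == torch.float16
```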
@Azure-Tang Is bf16 support done? Have you made a PR elsewhere?
Haven't done it yet, maybe next week?