
Plan for BF16 datatype ?

Open pauldintel opened this issue 1 year ago • 12 comments

Any plans for SIMD optimizations for the BFloat16 datatype? Thanks

pauldintel avatar Feb 07 '24 02:02 pauldintel

Hi @pauldintel! That shouldn’t be too hard to add and can help a lot on older x86 and newer mobile CPUs. Would you like to contribute? Any specific distance functions you are looking for?

ashvardanian avatar Feb 07 '24 02:02 ashvardanian

@ashvardanian

I'm adding support for this. Would it make sense for f16 and bf16 to use check_c_source_compiles in cmake to detect compiler support?

check_c_source_compiles(
  [=[
int
main(int argc, char **argv)
{
  __bf16 foo = 1.0;
  (void)argc, (void)argv, (void)foo;
  return 0;
}
]=]
  HAS_BFLOAT16)

We can retain the ability to disable with #define SIMSIMD_NATIVE_F16 0.
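For illustration, a minimal sketch of how the same escape hatch could look for bf16; SIMSIMD_NATIVE_BF16 and simsimd_bf16_t are names I'm assuming by analogy with the f16 macro, not necessarily what the PR will use:

// Sketch only: choose the bf16 storage type based on compiler support,
// letting CMake (via HAS_BFLOAT16) or the user override the default.
#if !defined(SIMSIMD_NATIVE_BF16)
#define SIMSIMD_NATIVE_BF16 1
#endif

#if SIMSIMD_NATIVE_BF16
typedef __bf16 simsimd_bf16_t;         // native compiler type
#else
typedef unsigned short simsimd_bf16_t; // raw 16-bit storage, converted to f32 on use
#endif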

Note the bench disables native F16 - I think we can leave it on by default.

MarkReedZ avatar May 29 '24 21:05 MarkReedZ

Current benchmark results vs native f16

dot_bf16_serial_1536d/min_time:10.000/threads:12        1372 ns
cos_bf16_serial_1536d/min_time:10.000/threads:12        1485 ns
l2sq_bf16_serial_1536d/min_time:10.000/threads:12       1393 ns
kl_bf16_serial_1536d/min_time:10.000/threads:12         3352 ns
js_bf16_serial_1536d/min_time:10.000/threads:12         5069 ns

dot_f16_serial_1536d/min_time:10.000/threads:12          264 ns
cos_f16_serial_1536d/min_time:10.000/threads:12          264 ns
l2sq_f16_serial_1536d/min_time:10.000/threads:12         264 ns
kl_f16_serial_1536d/min_time:10.000/threads:12          2983 ns
js_f16_serial_1536d/min_time:10.000/threads:12          7858 ns

MarkReedZ avatar May 29 '24 21:05 MarkReedZ

Yes, @MarkReedZ, the check_c_source_compiles makes a lot of sense! Can you please clarify the benchmarking results? I'd assume bf16 should be faster than f16, so the duration/latency should be lower 🤔

ashvardanian avatar May 30 '24 01:05 ashvardanian

I put bf16, f16, and f32 dot_serial() in Godbolt. You can add and remove flags (avx2, avx512fp16, etc.) to see what's going on. Without flags, the bf16 version takes longer. Is the compiler using avx2/avx512 on the f16 serial code? That would explain the difference.

https://godbolt.org/z/EKE66h9GM

The bf16 and unsigned short f16 have the same performance in dot/cos/l2sq, but bf16 is faster in kl/js.

I'll play around with different compilers.

MarkReedZ avatar May 30 '24 03:05 MarkReedZ

Note that AVX512_BF16 only supports conversion between bf16 and f32, plus a dot product. So I believe our SIMD-accelerated functions will be converting bf16 to f32 and running the f32 algorithms.
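For the dot-product path, a rough sketch of how that instruction could be used (assuming GCC/Clang with -mavx512bf16 and the AVX-512 base flags, a length that is a multiple of 32, and packed bf16 words; the vector casts rely on the GCC/Clang same-size vector cast extension, and this is illustrative only, not the exact kernel):

#include <immintrin.h>
#include <stddef.h>

float dot_bf16_avx512bf16(unsigned short const* a, unsigned short const* b, size_t n) {
    __m512 ab = _mm512_setzero_ps();
    for (size_t i = 0; i + 32 <= n; i += 32) {
        __m512i a_vec = _mm512_loadu_si512(a + i);
        __m512i b_vec = _mm512_loadu_si512(b + i);
        // 32 bf16 products per iteration, accumulated into 16 f32 lanes
        ab = _mm512_dpbf16_ps(ab, (__m512bh)a_vec, (__m512bh)b_vec);
    }
    return _mm512_reduce_add_ps(ab); // horizontal sum of the f32 accumulator
}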

Or perhaps it is possible to do a bf16 -> f16 conversion if we can find a way to just shift the exponent.

MarkReedZ avatar May 30 '24 03:05 MarkReedZ

In most cases it would be better to perform dot products in bf16, upscaling and accumulating in f32 down the road.

ashvardanian avatar May 30 '24 03:05 ashvardanian

I added the conversion function for compilers that don't support __bf16

SIMSIMD_PUBLIC simsimd_f32_t simsimd_uncompress_bf16(unsigned short x) {
    unsigned int tmp = (unsigned int)x << 16; // Place the bf16 bits in the upper half; the low f32 mantissa bits stay zero
    return *((float*)&tmp);
}
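For reference, a serial dot product on top of that helper could look roughly like this (the function name and exact loop are just a sketch, not the repo's implementation):

simsimd_f32_t dot_bf16_serial_sketch(unsigned short const* a, unsigned short const* b, int n) {
    simsimd_f32_t ab = 0;
    for (int i = 0; i != n; ++i) // upconvert both operands to f32, multiply and accumulate in f32
        ab += simsimd_uncompress_bf16(a[i]) * simsimd_uncompress_bf16(b[i]);
    return ab;
}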

Using the conversion to f32 instead of the native __bf16 arithmetic, we get almost the same timings as with plain f32.

unsigned short bf16 -> f32 conversion

dot_bf16_serial_1536d/min_time:10.000/threads:12          183 ns
cos_bf16_serial_1536d/min_time:10.000/threads:12          202 ns
l2sq_bf16_serial_1536d/min_time:10.000/threads:12         166 ns
kl_bf16_serial_1536d/min_time:10.000/threads:12          1505 ns
js_bf16_serial_1536d/min_time:10.000/threads:12          3795 ns

A PR will be up when I have a minute.

MarkReedZ avatar May 30 '24 19:05 MarkReedZ

@MarkReedZ which machine is this benchmark running on? Intel BF16 should show better results on 4th Gen Xeon Sapphire Rapids (SPR) with the AMX accelerators enabled, because BF16 is supposed to outperform FP16 in matrix multiplication. I am not sure from the above which distance calculations require matrix-multiply operations.

pauldintel avatar May 31 '24 20:05 pauldintel

Alternatively, you can also test on AMD Genoa chips. Like Intel Sapphire Rapids, they support AVX-512 BF16, but unlike Intel they don't support F16... so the relative win will be much larger.
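If in doubt about what a given box exposes, a quick standalone probe with GCC/Clang's cpuid.h can confirm it (AVX512_BF16 is reported in CPUID leaf 7, sub-leaf 1, EAX bit 5; this is just an illustration, not SimSIMD code):

#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    __get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx); // leaf 7, sub-leaf 1
    printf("AVX512_BF16: %s\n", (eax & (1u << 5)) ? "yes" : "no");
    return 0;
}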

ashvardanian avatar May 31 '24 21:05 ashvardanian

As far as I know, Genoa has no BF16 support; at the moment it works on Intel SPR with AMX acceleration only.

pauldintel avatar May 31 '24 23:05 pauldintel

@pauldintel it should be supported.

ashvardanian avatar Jun 02 '24 01:06 ashvardanian

Hey @pauldintel! Have you had the chance to check the bf16 functionality? Have you ever tried to use AMX for vector-matrix operations, i.e. when one of the arguments contains just one non-zero row/column?

ashvardanian avatar Aug 29 '24 00:08 ashvardanian

@ashvardanian we have tested inner products between two matrices. Before using the AMX inner product, we used Intel oneDNN at runtime to reorder the F32 data into BF16. For certain datasets and batched operations we have seen 1.51x to 14x improvement (64 to 1024 dimensions) over the FAISS IndexFlat scalar IP, all with Intel AMX. The FAISS IndexFlat BLAS IP shows up to a 4.8x gain with AMX.

For a single query, compared to native FP32:
FP32->BF16 AMX speedup: about 4.85x
BF16 AMX speedup: about 33.7x

Hope that helps

pauldintel avatar Aug 29 '24 04:08 pauldintel