
[Feature] Matmul for CPU


Some of the most popular models provide weights in bfloat16, which unfortunately cannot be loaded on the CPU because Matmul::eval_cpu only supports float32.

I know CPU support is not a priority, but it would be great if my code could run on platforms other than mac arm64, even if very slowly.
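
For reference, a minimal sketch of the failure and the cast-to-float32 workaround (the mlx calls below are the standard Python API; the explicit cast is just an illustration, not a proposed fix):

```python
import mlx.core as mx

mx.set_default_device(mx.cpu)

# bfloat16 weights, as shipped by many popular models
a = mx.random.normal((4, 8)).astype(mx.bfloat16)
b = mx.random.normal((8, 2)).astype(mx.bfloat16)

# mx.matmul(a, b) fails here, since Matmul::eval_cpu only
# handles float32; casting up first works, at a memory cost
c = mx.matmul(a.astype(mx.float32), b.astype(mx.float32))
print(c.dtype)  # float32; cast back with c.astype(mx.bfloat16) if needed
```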

zcbenz · Oct 22 '24

Maybe this is also interesting to look at: https://github.com/microsoft/BitNet

thegodone · Oct 22 '24

Are there plans for supporting integer tensors in tensordot/matmul?

polvalente · Nov 24 '24

We're not opposed to having integer support for matmul, but it's not an active priority at the moment.
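
In the meantime, a sketch of a possible workaround: round-trip through float32, which is exact as long as all values stay small enough to be represented exactly (magnitudes below 2**24):

```python
import mlx.core as mx

x = mx.array([[1, 2], [3, 4]], dtype=mx.int32)
y = mx.array([[5, 6], [7, 8]], dtype=mx.int32)

# No integer matmul kernel, so cast to float32, multiply,
# and cast back; exact while inputs and sums fit in 2**24
z = mx.matmul(x.astype(mx.float32), y.astype(mx.float32)).astype(mx.int32)
print(z)  # [[19, 22], [43, 50]]
```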

awni · Nov 25 '24

Closing, as mlx now has CPU matmul for float16/bfloat16.

zcbenz · Nov 11 '25