Awni Hannun

Results: 1014 comments of Awni Hannun

@ngam could you say more about packaging the library correctly? It seems to work fine with PyPI? I tested it here: https://test.pypi.org/project/awni-test-mlx/0.0.7/

Oh ok, let me know if we need to change anything on our end!

@ngam it turns out you were 100% right, the packaging was broken (🤦 I didn't consider that I had gguf already installed on the machines I tested on). We fixed...

This is so awesome! One more dumb question: as we do new releases, how does the conda-forge distribution get updated?

Yes, that's an oversight: the Mistral example does fp16, but the Llama example does fp32 by default since that's what the weights are saved in. You can see an example of...
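
A rough illustration of the difference (a minimal sketch, not the exact code in the examples repo; the weight file name is an assumption):

```python
import mlx.core as mx

# Weights load in whatever precision they were saved in (fp32 in this
# assumed example); casting to fp16 roughly halves the memory footprint.
weights = mx.load("weights.npz")  # dict of parameter name -> mx.array
weights_fp16 = {name: w.astype(mx.float16) for name, w in weights.items()}
```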

Thanks! And thanks for setting this up. I will plan to add the new install path to our docs.

Is anyone working on groups for convolution? E.g. @Stealeristaken? I am closing https://github.com/ml-explore/mlx/pull/637 as it's inactive, but it might be useful as a starting point.

I don't think anyone is actively working on it, would be great to have you work on it @Rifur13 !
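
For context, grouped convolution splits the channels into independent groups, each convolved with its own slice of the weights. Until it is supported natively, it can be emulated with standard `mx.conv2d` calls; a rough sketch (the helper name is made up, and shapes follow MLX's channels-last input and `(out, kH, kW, in)` weight layout):

```python
import mlx.core as mx

def grouped_conv2d(x, w, groups, stride=1, padding=0):
    """Emulate a grouped conv by running one conv2d per channel group.

    x: (N, H, W, C_in), w: (C_out, kH, kW, C_in // groups).
    """
    x_groups = mx.split(x, groups, axis=-1)   # split input channels
    w_groups = mx.split(w, groups, axis=0)    # split output channels
    outs = [
        mx.conv2d(xg, wg, stride=stride, padding=padding)
        for xg, wg in zip(x_groups, w_groups)
    ]
    return mx.concatenate(outs, axis=-1)      # re-join along channels

# Example: 8 input channels, 16 output channels, 4 groups
x = mx.random.normal(shape=(1, 32, 32, 8))
w = mx.random.normal(shape=(16, 3, 3, 2))    # C_in // groups == 2
y = grouped_conv2d(x, w, groups=4)           # shape (1, 30, 30, 16)
```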

We would love to have these operations available directly in MLX. It's not our top priority, but something we intend to add in the future or, even better, accept...

So you can look at how `mlx.core.random` works. We could do something similar for `mlx.core.linalg`. Basically it's a nested namespace on the C++ side, `mlx::core::random`, and then we make it...
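
As a hypothetical user-facing sketch of that pattern (the `linalg` op shown is an assumption, only to mirror how `mlx.core.random` is exposed, not a description of the current API):

```python
import mlx.core as mx

# mlx.core.random is exposed as a submodule backed by the C++ namespace
# mlx::core::random:
key = mx.random.key(0)
x = mx.random.normal(shape=(4, 4), key=key)

# An mlx.core.linalg submodule would mirror that layout, backed by a C++
# mlx::core::linalg namespace. The call below is a hypothetical example of
# what such an op might look like:
# n = mx.linalg.norm(x)
```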