Awni Hannun
Indeed, we are aware that there are performance cliffs in our convolutions; see e.g. #1313. Thanks for the benchmark though! We will make sure to include the 3D stuff in...
You have a couple of options:
- use the Python buffer protocol: `memoryview(a)`
- get the DLPack capsule: `a.__dlpack__()`

More info in the docs on [converting to other frameworks](https://ml-explore.github.io/mlx/build/html/usage/numpy.html#).
That is unfortunately expected behavior. Right now `svd` (and several other [linalg operations](https://ml-explore.github.io/mlx/build/html/python/linalg.html)) is only supported on the CPU back-end. You can fix that by passing in the CPU stream...
I'm going to change this from a bug to a feature request and mark it as such. Note it's not a trivial op to implement on the GPU so it...
I don't know of anyone working on this. Happy to accept a contribution.
No progress :\
Can you try running on 0.23.1 or higher with these two env variables set:
```
MLX_MAX_OPS_PER_BUFFER=8 MLX_MAX_MB_PER_BUFFER=1000000 uv run --with mlx==0.23.1 something.py
```
> Is there something I need to know on the subject to choose the right values? Ideally not. We want to set these so they work reasonably well for the...
Would you mind sharing one more thing:
```
ioreg -l | grep gpu-core-count
```
We may need to do more fine-grained settings for those variables based on the GPU core...
One option is to go the route of cloning SciPy in MLX, kind of like JAX does (e.g. put this in a new package `mlx.scipy.signal`). It's a big package so I'm...