Awni Hannun
Indeed, complex64 is not up to standard yet. I think the reduction is not supported for complex! We should definitely fix that, along with the `real`/`imag` ops.
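For context, a minimal sketch of the kind of operations under discussion, assuming `mlx.core` is imported as `mx`; whether the reduction raises depends on the MLX version, and the `real`/`imag` spelling is an assumption rather than a confirmed API:

```python
import mlx.core as mx

# Complex Python scalars produce a complex64 array.
a = mx.array([1 + 2j, 3 - 4j])
print(a.dtype)  # complex64

# A reduction over a complex array: this is the case reported as unsupported,
# so it may raise until complex reductions are implemented.
try:
    print(mx.sum(a))
except Exception as e:
    print("complex reduction not supported yet:", e)

# real/imag extraction is part of what needs fixing as well; the exact
# spelling (a.real / mx.real(a)) is an assumption -- check the current API.
```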
The docs on that page are all autogenerated from the source code that builds the [array bindings](https://github.com/ml-explore/mlx/blob/main/python/src/array.cpp). Also, you should not need to use the `gh-pages` branch to build the...
Also, the autogenerated docs from C++ won't change unless you rebuild the source code (this can be a bit annoying), but if you are changing anything in the C++...
It's hard to give one answer to that question. But, for example, if you want to add an example to a core operation, you can do it in the C++...
> Hmm, so one way you are currently suggesting is to directly add code snippets as doc strings. The docstrings get added into the documentation for the operation. See for...
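To make that concrete, here is a sketch of the kind of short usage snippet that could be embedded in an operation's docstring (illustrative only; the actual docstring text lives in the C++ binding code, e.g. the array bindings linked above):

```python
# Example snippet as it might appear in an op's docstring:
import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])
mx.sum(a)
# array(6, dtype=float32)
```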
Exactly :)
Hey @wjessup, we are working hard on perf right now. Sorry for not adding commentary to this benchmark. A lot of the work we are doing is likely to improve...
Also @wjessup, I notice that you are benchmarking torch on the CPU with MLX on the GPU. You should compare the same device for both. For small ops, the CPU...
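For what it's worth, a rough sketch of a same-device comparison, assuming an Apple-silicon machine where PyTorch's MPS backend is available (you could equally put both on the CPU); the timing harness here is illustrative, not a prescribed benchmark:

```python
import time

import mlx.core as mx
import torch

# Put both frameworks on the GPU so the comparison is apples to apples.
mx.set_default_device(mx.gpu)
torch_device = torch.device("mps")

x_mx = mx.random.normal((4096, 4096))
x_pt = torch.randn(4096, 4096, device=torch_device)

# MLX is lazy: call mx.eval() so the work actually runs inside the timed region.
start = time.perf_counter()
mx.eval(x_mx @ x_mx)
print("mlx  :", time.perf_counter() - start)

# Synchronize so queued MPS work is included in the measurement.
start = time.perf_counter()
y = x_pt @ x_pt
torch.mps.synchronize()
print("torch:", time.perf_counter() - start)
```

In practice you would also warm up and average over many runs, but the key point is matching devices and forcing evaluation/synchronization before stopping the clock.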
What do you think about waiting for multi-output primitives and doing a primitive for this instead?
Should we close this PR? I don't think we intend to merge it, right?