Awni Hannun
> it might be useful to have a section of examples on engineering/physics applications

That's a great idea! We could potentially put a `physics/` directory in the MLX examples repo,...

> Does this mean that MLX is faster in creating the comp graphs but MLX conv2D is slower when it is time to do the actual computation?

Exactly right.
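To make the distinction concrete, here is a minimal timing sketch (my own illustration, not from the thread) separating construction of MLX's lazy graph from its evaluation; `mx.eval` is what actually triggers the conv kernel:

```python
import time
import mlx.core as mx
import mlx.nn as nn

x = mx.random.normal((1, 64, 64, 3))  # MLX convs expect channels-last (NHWC)
conv = nn.Conv2d(3, 16, kernel_size=3)

t0 = time.perf_counter()
y = conv(x)   # only records the op in the lazy graph; fast
t1 = time.perf_counter()
mx.eval(y)    # forces the conv kernel to actually run
t2 = time.perf_counter()

print(f"graph construction: {(t1 - t0) * 1e3:.3f} ms")
print(f"actual computation: {(t2 - t1) * 1e3:.3f} ms")
```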
Yes, int64 is not yet supported in our reductions, so that's expected. It's definitely possible to add, but not the highest priority at the moment. Happy to take a contribution for...
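For context, a minimal repro sketch (mine, not from the thread) of the unsupported case, plus the obvious workaround of casting to a supported dtype first:

```python
import mlx.core as mx

a = mx.array([1, 2, 3], dtype=mx.int64)
try:
    # MLX is lazy, so mx.eval is needed to actually run the reduction
    mx.eval(a.sum())
except Exception as e:
    print(f"int64 reduction failed: {e}")

# Workaround: cast down before reducing.
print(a.astype(mx.int32).sum())
```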
Thanks @vj-krish !!
Good question, right now `constant_values` is really just a scalar. We should probably throw or something if the wrong shape is provided. I think in NumPy the `constant_values` need...
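A quick sketch of the difference (assuming the MLX behavior described above; the NumPy call is standard):

```python
import mlx.core as mx
import numpy as np

a = mx.ones((2, 2))

# MLX: constant_values is effectively a single scalar applied everywhere.
b = mx.pad(a, 1, constant_values=7.0)

# NumPy additionally accepts per-axis (before, after) pairs:
c = np.pad(np.ones((2, 2)), 1, constant_values=((1, 2), (3, 4)))
print(b)
print(c)
```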
See https://github.com/ml-explore/mlx-examples/issues/70 for some ideas around how to reduce LoRA memory consumption until we have quantization.
There is a maximum size you can allocate into a single buffer (which is a machine-specific property). I think it is less than 9.8 GB for you. But either...
> It is a 16GB Air M1, do you happen to know a ballpark of the limit?

I don't know but you could try running this until it breaks: ```python...
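The snippet itself is cut off above; here is a guess at what such a probe could look like (my reconstruction, doubling the buffer size until the allocation fails):

```python
import mlx.core as mx

size_mb = 256
while True:
    try:
        # float32 is 4 bytes, so this asks for size_mb megabytes in one buffer
        a = mx.zeros((size_mb * 1024 * 1024 // 4,), dtype=mx.float32)
        mx.eval(a)  # MLX is lazy, so force the allocation to happen
        print(f"allocated {size_mb / 1024:.2f} GB OK")
        size_mb *= 2
    except Exception as e:
        print(f"failed at {size_mb / 1024:.2f} GB: {e}")
        break
```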
I'm going to close this issue as I'm not sure why it's still open. Feel free to file a new issue if you still run into memory allocation problems.
@dougdew64 sorry I somehow missed this thread 😓. There is an issue filed (#410) on exactly this problem. I think the warnings in this case are about benign issues,...