
Examples in the MLX framework

Results: 197 mlx-examples issues

Hello, thank you for creating this outstanding tool. I have a feature suggestion; if it's feasible, please consider it. I can give it a try as well. It would be...

I'm interested in integrating [RWKV v5](https://github.com/BlinkDL/RWKV-LM) with MLX. Notably, RWKV uses [CUDA as its kernel](https://github.com/BlinkDL/RWKV-LM/tree/main/RWKV-v5/cuda). I want to work on this to learn, and any guidance on this process would...

What is the difference between an MLX model and a Hugging Face model? I notice there is a weight file, `*.npz`; is this file part of the MLX model? If I want...
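As background on the `*.npz` file mentioned in this issue: `.npz` is NumPy's zipped archive of named arrays, and the MLX example converters write model weights in this format. A minimal sketch of writing and reading such an archive (the parameter names below are hypothetical, not from any real checkpoint):

```python
import io

import numpy as np

# Hypothetical weight dict, keyed by parameter name (names are made up).
weights = {
    "layers.0.attention.wq.weight": np.zeros((8, 8), dtype=np.float16),
    "layers.0.attention.wk.weight": np.zeros((8, 8), dtype=np.float16),
}

# Save all arrays into a single .npz archive (in memory here, so no
# file is left behind); conversion scripts write this to disk instead.
buf = io.BytesIO()
np.savez(buf, **weights)

# Load it back; the keys are the original parameter names.
buf.seek(0)
loaded = np.load(buf)
print(sorted(loaded.files))
```

The `.npz` file holds only named tensors; the model architecture itself still comes from the accompanying Python code that builds the network and loads these arrays by name.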

When fine-tuning Mistral 7B with 4-bit quantization (QLoRA), I'm seeing huge memory usage (160 GB VRAM). Parameters used:
- `--batch-size 1`
- `--lora-layers 16`

The dataset is composed of around 1200...

So far this is blocked on:
- https://github.com/ml-explore/mlx/issues/100, because the conv2d `groups` parameter is needed

I'm trying to run the Stable Diffusion `txt2image.py` with the float16 dtype on an M1 8 GB iMac, since the float32 dtype requires more than 8 GB of memory. I modified this line of code: https://github.com/ml-explore/mlx-examples/blob/e9b32747b424468eabb5a7f0609f275637e1a0c3/stable_diffusion/txt2image.py#L26...
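On the memory question in this issue: switching from float32 to float16 halves the storage per weight, which is why a model that overflows 8 GB in float32 may fit in float16. A small NumPy sketch of the arithmetic (the matrix size is illustrative, not the actual Stable Diffusion weights):

```python
import numpy as np

# An illustrative weight matrix; real UNet weights are far larger.
w_fp32 = np.ones((1024, 1024), dtype=np.float32)
w_fp16 = w_fp32.astype(np.float16)

print(w_fp32.nbytes)  # 4 bytes per element -> 4194304
print(w_fp16.nbytes)  # 2 bytes per element -> 2097152
print(w_fp32.nbytes // w_fp16.nbytes)  # 2x reduction
```

The cast trades precision for footprint: float16 has roughly 3 decimal digits of precision and a much smaller exponent range, which is usually acceptable for inference.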

In the first step, if I didn't use `python convert.py -q` to generate a quantized model, does that mean it is unnecessary to use the `-d, --de-quantize` parameter to generate a...
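To make the quantize/de-quantize relationship behind these flags concrete: quantization maps float weights to low-bit integers plus a scale, and de-quantization reverses that mapping (lossily), so de-quantizing only makes sense for a model that was quantized in the first place. A toy sketch of a symmetric 4-bit round trip (this is an illustration of the idea, not the actual `convert.py` implementation):

```python
import numpy as np

def quantize_4bit(w):
    # Symmetric per-tensor quantization into the signed 4-bit range [-7, 7].
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reverse the mapping back to floats; rounding error remains.
    return q.astype(np.float32) * scale

w = np.array([0.1, -0.5, 0.9, -1.4], dtype=np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)
# Reconstruction error is bounded by one quantization step.
print(np.max(np.abs(w - w_hat)) < scale)  # True
```

The key point is the asymmetry: quantizing then de-quantizing recovers only an approximation of the original weights, while a model that was never quantized has nothing for `--de-quantize` to undo.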

Hi all! Is it possible to convert a medical image segmentation model trained using the [nnU-Net framework](https://github.com/MIC-DKFZ/nnUNet) and stored as a `.pth` file into an MLX-compatible format? Thanks!

A request for information/documentation rather than an 'issue', but I've been trying to track and document the diffusion process in `image2image.py` in `mlx-examples` from start to finish, and I can...

I noticed today that when I use `python -m mlx_lm.generate`, the output doesn't match what I get locally using `python lora.py`. For example, local output using LoRA adapters: ``` (base)...