Results: 6 comments of Gus

I'll try this in a few days! Thank you!

I have a 16GB MacBook Pro with an M2 Pro chip and encountered this exact error at 60 epochs when I increased the training data size, batch size, and layers...

I was running the command below. It worked once I reduced the parameters that drive GPU and RAM usage (batch size, layers, and so on).

```
!python -m mlx_lm.lora \
  --model mlx-community/Meta-Llama-3-8B-Instruct-4bit \
  --train \
  --batch-size 1...
```
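As a minimal sketch of what "reducing the memory-related parameters" could look like in practice: everything below beyond `--model`, `--train`, and `--batch-size` is an illustrative assumption about the `mlx_lm.lora` flags and values, not the remainder of the truncated command above.

```
# Hypothetical reduced-memory LoRA run on a 16GB machine (values are illustrative):
#   --batch-size 1    -> smaller batches cut peak memory
#   --lora-layers 4   -> fine-tune fewer layers
#   --grad-checkpoint -> trades extra compute for lower activation memory
!python -m mlx_lm.lora \
  --model mlx-community/Meta-Llama-3-8B-Instruct-4bit \
  --train \
  --data ./data \
  --batch-size 1 \
  --lora-layers 4 \
  --grad-checkpoint
```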

I also added a YAML file for Llama 3:

```
original_repo: meta-llama/Meta-Llama-3-8B-Instruct
mlx-repo: mlx-community/Meta-Llama-3-8B-Instruct-4bit
quantize: 4bit
default_language: en
```

I'm glad I could help! This project is great; thank you for working on it! :D