Collaboration: Unsloth + llm-course
Hey @mlabonne! Actually found this repo via LinkedIn! :) Happy New Year!
Had a look through your notebooks - they look sick! Interestingly, I was trying to run Axolotl via Google Colab myself, to no avail.
Anyways, I'm the maintainer of Unsloth, which makes QLoRA 2.2x faster and uses 62% less memory! It would be awesome if we could somehow collaborate :)
I have a few examples:
- Mistral 7B + Alpaca (the basic QLoRA setup, sketched after this list): https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing
- DPO Zephyr replication (also sketched after this list): https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing
- TinyLlama automatic RoPE scaling from 2048 to 4096 tokens + full Alpaca dataset in 80 minutes: https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing (still running since TinyLlama was just released!)
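For context, the core pattern behind the Alpaca and TinyLlama notebooks looks roughly like this. It's a minimal sketch, not the exact notebook config: the model name, dataset, prompt template, and hyperparameters below are illustrative assumptions.

```python
# Minimal Unsloth QLoRA sketch (illustrative values, not the exact notebook config)
import torch
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer
from datasets import load_dataset

max_seq_length = 4096  # past the base 2048, Unsloth applies RoPE scaling automatically

# Load the model in 4-bit (QLoRA) with Unsloth's fused kernels
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,        # auto-detects bf16 on Ampere+, fp16 otherwise
    load_in_4bit = True,
)

# Attach LoRA adapters to the attention and MLP projections
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = True,
)

# Format Alpaca rows into a single "text" column for SFTTrainer
alpaca_prompt = """### Instruction:
{}

### Input:
{}

### Response:
{}"""

def format_alpaca(examples):
    texts = [alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token
             for ins, inp, out in zip(examples["instruction"],
                                      examples["input"], examples["output"])]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train").map(format_alpaca, batched = True)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-4,
        max_steps = 60,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        output_dir = "outputs",
    ),
)
trainer.train()
```

The `max_seq_length = 4096` line is what drives the automatic RoPE scaling mentioned in the TinyLlama item; the speed and memory wins come from Unsloth's kernels rather than any API change, so the TRL side stays standard.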
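The DPO Zephyr replication follows the same pattern, swapping `SFTTrainer` for TRL's `DPOTrainer` after patching it. Again a rough sketch under assumptions: the checkpoint, dataset slice, preprocessing, and hyperparameters are illustrative, not the exact notebook values.

```python
# Minimal Unsloth DPO sketch (illustrative, not the exact Zephyr replication config)
from unsloth import FastLanguageModel, PatchDPOTrainer
PatchDPOTrainer()  # patch TRL's DPOTrainer to work with Unsloth's fast path

from datasets import load_dataset
from transformers import TrainingArguments
from trl import DPOTrainer

max_seq_length = 1024

# Start from an SFT checkpoint, loaded in 4-bit
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/zephyr-sft-bnb-4bit",
    max_seq_length = max_seq_length,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# DPOTrainer expects string columns "prompt", "chosen", "rejected";
# ultrafeedback_binarized stores chats, so keep only the final assistant turn
def to_strings(row):
    return {"prompt": row["prompt"],
            "chosen": row["chosen"][-1]["content"],
            "rejected": row["rejected"][-1]["content"]}

dpo_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized",
                           split = "train_prefs").map(to_strings)

trainer = DPOTrainer(
    model = model,
    ref_model = None,   # with LoRA, TRL derives the reference model by disabling adapters
    beta = 0.1,         # strength of the KL penalty toward the reference
    train_dataset = dpo_dataset,
    tokenizer = tokenizer,
    max_length = max_seq_length,
    max_prompt_length = 512,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 5e-6,
        max_steps = 100,
        output_dir = "dpo_outputs",
    ),
)
trainer.train()
```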
Anyways great work again!
Hi @danielhanchen, cool, I know Unsloth from r/LocalLlama. Do you have something in particular in mind? We can continue the conversation on Twitter (@maximelabonne) if you don't mind.
Thank you, I have the same question.
@mlabonne I'll bring the chat over to Twitter! Oh lol, actually I don't have Twitter Premium, so you first have to follow me :))
Unsloth has been added!