LLaMA-8bit-LoRA
Chat LLaMA
8bit-LoRA or 4bit-LoRA
Repository for training a LoRA for the LLaMA (1 and 2) models on HuggingFace with 8-bit or 4-bit quantization. Research use only for LLaMA 1; LLaMA 2 is licensed for commercial use.
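
A minimal sketch of the kind of setup this implies, assuming the HuggingFace transformers, bitsandbytes, and peft stack; the model name and LoRA hyperparameters below are illustrative placeholders, not the repo's actual configuration:

```python
# Sketch: load a LLaMA base model in 8-bit (or 4-bit) and attach a LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # assumed example checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # or load_in_4bit=True
    device_map="auto",
    torch_dtype=torch.float16,
)

# Prepare the quantized model for training (casts norms, enables gradient checkpointing).
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                  # illustrative rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target projections
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```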
👉 Join our Discord Server for updates, support & collaboration
Dataset creation, training, weight merging, and quantization instructions are in the docs.
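
For the weight-merging step, a hedged sketch using peft's merge utility; the adapter and output paths are placeholders and the base checkpoint is an assumption:

```python
# Sketch: fold a trained LoRA adapter back into the full-precision base weights.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # assumed base checkpoint
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path
model = model.merge_and_unload()          # merges LoRA deltas into the base weights
model.save_pretrained("path/to/merged-model")
```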