
Would you kindly update Xlora to support Quantized Models?

Abdullah-kwl opened this issue 1 year ago • 9 comments

To train X-LoRA on free Colab we need to load a quantized model, but xlora currently does not support quantized models and the layers are not being swapped. On free Colab the model is usually loaded in 4-bit or 8-bit with BitsAndBytesConfig, yet such a quantized model cannot be converted into an X-LoRA model. Please update xlora to support quantized models.

[Screenshot 2024-03-20 052216]
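For reference, this is roughly the setup in question (a minimal sketch; the model ID is just an example, and the conversion step mentioned in the comment is the one described in the xlora README):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model in 4-bit, as is typical on free Colab.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",  # example model ID
    quantization_config=bnb_config,
    device_map="auto",
)

# Converting this quantized model to X-LoRA (e.g. xlora.add_xlora_to_model, as in
# the xlora README) is the step where the adapter layers are currently not swapped.
```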

Abdullah-kwl avatar Mar 24 '24 18:03 Abdullah-kwl

@Abdullah-kwl , could you please paste the result of printing model?

EricLBuehler avatar Mar 25 '24 12:03 EricLBuehler

PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralForCausalLM(
      (model): MistralModel(
        (embed_tokens): Embedding(32000, 4096, padding_idx=2)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
              (k_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=4096, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (v_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=4096, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (o_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
              (up_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
              (down_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=14336, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=14336, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=14336, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=14336, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=4096, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=4096, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (act_fn): SiLU()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
      )
      (lm_head): CastOutputToFloat(
        (0): Linear(in_features=4096, out_features=32000, bias=False)
      )
    )
  )
  (internal_xlora_classifier): xLoRAClassifier(
    (softmax): TemperatureScaledSoftmax(
      (softmax): Softmax(dim=-1)
    )
    (inner): ModuleList(
      (0): Linear(in_features=4096, out_features=2048, bias=True)
      (1-6): 6 x Linear(in_features=2048, out_features=2048, bias=True)
    )
    (last): Linear(in_features=2048, out_features=3, bias=True)
  )
)

Abdullah-kwl avatar Mar 25 '24 12:03 Abdullah-kwl

I have tested your updated code from https://github.com/EricLBuehler/xlora/pull/25.

Quantized models now train with xlora, so it has started working for the quantized case, but I am running into an issue when I try to do inference with the trained quantized X-LoRA model.

I am facing this error: RecursionError: maximum recursion depth exceeded while calling a Python object

[Screenshot 2024-03-26 170126] [Screenshot 2024-03-26 170248]
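For context, the inference step looks roughly like this (a sketch only; the prompt and generation settings are illustrative, and model is the trained quantized X-LoRA model from the notebook). The recursion error appears to be hit during the generate() call:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")  # example model ID

# `model` is the trained quantized X-LoRA model from the notebook above.
inputs = tokenizer("What is X-LoRA?", return_tensors="pt").to(model.device)

# This generate() call is where the maximum-recursion-depth error shows up.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```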

Abdullah-kwl avatar Mar 26 '24 12:03 Abdullah-kwl

You can review my notebook at: https://colab.research.google.com/drive/1_B1ualsMbRfYWy0gdjdMi9RSDU-qmPHf#scrollTo=I4UZaqDAnnB6

Abdullah-kwl avatar Mar 26 '24 12:03 Abdullah-kwl

Thank you. I plan on working on this later today.

EricLBuehler avatar Mar 26 '24 12:03 EricLBuehler

Also, check out this notebook: https://colab.research.google.com/drive/1Eyh-mBd0LpcJwyzBHjGKhwNLQ9R74eLl?usp=drive_open

Note that a few lines are being repeated in the output.

Abdullah-kwl avatar Mar 28 '24 11:03 Abdullah-kwl

What adjustments should we make if we want to extend X-LoRA to support IA^3?
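(For context, a plain IA^3 adapter in PEFT is configured as below; the target module names are the usual Mistral projections and are only an example. The question is what the X-LoRA classifier and layer swapping would need in order to scale IA^3 adapters instead of LoRA ones.)

```python
from peft import IA3Config, get_peft_model

# Standard PEFT IA^3 setup (not an xlora API); shown only to illustrate what the
# adapters being asked about look like. `model` is a base causal LM.
ia3_config = IA3Config(
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "v_proj", "down_proj"],
    feedforward_modules=["down_proj"],
)
ia3_model = get_peft_model(model, ia3_config)
ia3_model.print_trainable_parameters()
```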

Abdullah-kwl avatar Mar 28 '24 11:03 Abdullah-kwl

@Abdullah-kwl, we have begun work here and it will be completed shortly.

EricLBuehler avatar Apr 15 '24 15:04 EricLBuehler

Hi @EricLBuehler,

Just wanted to make sure that the current version supports quantized models, since I think some tests haven't passed here and the commit hasn't been merged into the main branch.
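One quick way I check whether the adapter layers are actually being swapped on a quantized model is to inspect the module classes after conversion (a diagnostic sketch using only PyTorch's named_modules, not an official xlora API):

```python
from collections import Counter

# Count the wrapper classes around the attention/MLP projections. If the swap
# worked, xlora's layer classes should show up here rather than only the plain
# peft lora.Linear4bit wrappers seen in the printout above.
layer_types = Counter(
    type(module).__name__
    for name, module in model.named_modules()
    if name.endswith(("k_proj", "v_proj", "down_proj"))
)
print(layer_types)
```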

TheTahaaa avatar Aug 25 '24 18:08 TheTahaaa