FEAT WANT: Dynamic quantization config for bitsandbytes
Feature request
Will transformers support a dynamic quantization config for bitsandbytes? Currently transformers supports dynamic quantization for HQQ, via:
from transformers import HqqConfig

q4_config = {"nbits": 4, "group_size": 64}
q8_config = {"nbits": 8, "group_size": 64}

quant_config = HqqConfig(
    dynamic_config={
        "self_attn.q_proj": q4_config,
        "self_attn.k_proj": q4_config,
        "self_attn.v_proj": q4_config,
        "self_attn.o_proj": q4_config,
        "mlp.gate_proj": q8_config,
        "mlp.up_proj": q4_config,
        "mlp.down_proj": q4_config,
    }
)
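The resulting config is then passed to from_pretrained like any other quantization config, e.g. (the checkpoint name below is just illustrative; any HQQ-compatible model works):

from transformers import AutoModelForCausalLM

# Load the model with the per-module HQQ config defined above.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",
    quantization_config=quant_config,
    device_map="auto",
)

It would be great to have an equivalent dynamic_config for BitsAndBytesConfig.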
cc @MekkCyber
Hey @AaronZLT! I think this would be complicated but definitely possible, because we need to combine Bnb4BitHfQuantizer and Bnb8BitHfQuantizer! Would you be open to submitting a PR?
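To make it concrete, here is a rough sketch of the per-module dispatch between bnb.nn.Linear4bit and bnb.nn.Linear8bitLt that such a quantizer would need. Both the dynamic_config format (module-name suffix -> bit width) and the replace_with_dynamic_bnb_linear helper are hypothetical, not existing transformers code:

import torch.nn as nn
import bitsandbytes as bnb

# Hypothetical format: map module-name suffixes to a bit width (4 or 8).
dynamic_config = {
    "self_attn.q_proj": 4,
    "mlp.gate_proj": 8,
}

def replace_with_dynamic_bnb_linear(model, dynamic_config, default_bits=4):
    """Sketch: swap nn.Linear modules for bnb 4-bit or 8-bit layers per config.

    As with transformers' existing replace_with_bnb_linear, the pretrained
    weights would be loaded into the new layers after this replacement step.
    """
    # Collect targets first so we don't mutate the model while iterating.
    targets = [
        (name, module)
        for name, module in model.named_modules()
        if isinstance(module, nn.Linear)
    ]
    for name, module in targets:
        # Pick the bit width from the first matching suffix, else the default.
        bits = next(
            (b for suffix, b in dynamic_config.items() if name.endswith(suffix)),
            default_bits,
        )
        if bits == 4:
            new_layer = bnb.nn.Linear4bit(
                module.in_features, module.out_features,
                bias=module.bias is not None,
            )
        else:
            new_layer = bnb.nn.Linear8bitLt(
                module.in_features, module.out_features,
                bias=module.bias is not None,
                has_fp16_weights=False,
            )
        parent_name, _, child_name = name.rpartition(".")
        parent = model.get_submodule(parent_name) if parent_name else model
        setattr(parent, child_name, new_layer)
    return model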
@MekkCyber Can you give me more information about that? I am interested in working on this.
Quantization Config:
BitsAndBytesConfig {
  "model.decoder.layers.0.self_attn.q_proj": {
    "_load_in_4bit": true,
    "_load_in_8bit": false,
    "bnb_4bit_compute_dtype": "float32",
    "bnb_4bit_quant_storage": "uint8",
    "bnb_4bit_quant_type": "fp4",
    "bnb_4bit_use_double_quant": false,
    "dynamic_config": null,
    "llm_int8_enable_fp32_cpu_offload": false,
    "llm_int8_has_fp16_weight": false,
    "llm_int8_skip_modules": null,
    "llm_int8_threshold": 6.0,
    "load_in_4bit": true,
    "load_in_8bit": false
  },
  "model.decoder.layers.1.self_attn.k_proj": {
    "_load_in_4bit": false,
    "_load_in_8bit": true,
    "bnb_4bit_compute_dtype": "float32",
    "bnb_4bit_quant_storage": "uint8",
    "bnb_4bit_quant_type": "fp4",
    "bnb_4bit_use_double_quant": false,
    "dynamic_config": null,
    "llm_int8_enable_fp32_cpu_offload": false,
    "llm_int8_has_fp16_weight": false,
    "llm_int8_skip_modules": null,
    "llm_int8_threshold": 6.0,
    "load_in_4bit": false,
    "load_in_8bit": true
  },
  "quant_method": "bitsandbytes"
}
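To spell out the layout I'm proposing: each module name maps to its own bitsandbytes sub-config. A minimal sketch in plain Python (this per-module layout is the proposal itself, not a current transformers API):

# Proposed layout (hypothetical): one bitsandbytes sub-config per module name.
q4 = {"load_in_4bit": True, "load_in_8bit": False, "bnb_4bit_quant_type": "fp4"}
q8 = {"load_in_4bit": False, "load_in_8bit": True}

quantization_config = {
    "model.decoder.layers.0.self_attn.q_proj": q4,
    "model.decoder.layers.1.self_attn.k_proj": q8,
    "quant_method": "bitsandbytes",
}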
Model Config:
BloomConfig {
  "apply_residual_connection_post_layernorm": false,
  "architectures": [
    "BloomForCausalLM"
  ],
  "attention_dropout": 0.0,
  "attention_softmax_in_fp32": true,
  "bias_dropout_fusion": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_dropout": 0.0,
  "hidden_size": 2048,
  "initializer_range": ...,
  "quantization_config": {
    "model.decoder.layers.0.self_attn.q_proj": {
      "_load_in_4bit": true,
      "_load_in_8bit": false,
      "bnb_4bit_compute_dtype": "float32",
      "bnb_4bit_quant_storage": "uint8",
      "bnb_4bit_quant_type": "fp4",
      "bnb_4bit_use_double_quant": false,
      "dynamic_config": null,
      "llm_int8_enable_fp32_cpu_offload": false,
      "llm_int8_has_fp16_weight": false,
      "llm_int8_skip_modules": null,
      "llm_int8_threshold": 6.0,
      "load_in_4bit": true,
      "load_in_8bit": false
    },
    "model.decoder.layers.1.self_attn.k_proj": {
      "_load_in_4bit": false,
      "_load_in_8bit": true,
      "bnb_4bit_compute_dtype": "float32",
      "bnb_4bit_quant_storage": "uint8",
      "bnb_4bit_quant_type": "fp4",
      "bnb_4bit_use_double_quant": false,
      "dynamic_config": null,
      "llm_int8_enable_fp32_cpu_offload": false,
      "llm_int8_has_fp16_weight": false,
      "llm_int8_skip_modules": null,
      "llm_int8_threshold": 6.0,
      "load_in_4bit": false,
      "load_in_8bit": true
    },
    "quant_method": "bitsandbytes"
  },
  "seq_length": 4096,
  "skip_bias_add": true,
  ...
}
Is this OK? Let me know if you have any more guidance for me, or if there is something I missed. @Rocketknight1 @AaronZLT @MekkCyber please take a look.