Does AutoAWQ support quantizing the GLM-4-9B-Chat and ChatGLM3-6B models? Attempting to load either one fails with the following traceback:
TypeError                                 Traceback (most recent call last)
Cell In[2], line 2
      1 # Load the model
----> 2 model = AutoAWQForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True)
      3 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)

File ~/miniforge3/envs/peft/lib/python3.10/site-packages/awq/models/auto.py:55, in AutoAWQForCausalLM.from_pretrained(self, model_path, trust_remote_code, safetensors, device_map, **model_init_kwargs)
     46 @classmethod
     47 def from_pretrained(
     48     self,
   (...)
     53     **model_init_kwargs,
     54 ) -> BaseAWQForCausalLM:
---> 55     model_type = check_and_get_model_type(
     56         model_path, trust_remote_code, **model_init_kwargs
     57     )
     59     return AWQ_CAUSAL_LM_MODEL_MAP[model_type].from_pretrained(
     60         model_path,
     61         model_type,
   (...)
     65         **model_init_kwargs,
     66     )

File ~/miniforge3/envs/peft/lib/python3.10/site-packages/awq/models/auto.py:34, in check_and_get_model_type(model_dir, trust_remote_code, **model_init_kwargs)
   ...
---> 34 raise TypeError(f"{config.model_type} isn't supported yet.")
     35 model_type = config.model_type
     36 return model_type

TypeError: chatglm isn't supported yet.
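From the traceback, the error is raised by check_and_get_model_type in awq/models/auto.py, which looks up config.model_type in AWQ_CAUSAL_LM_MODEL_MAP and raises TypeError when that type (here "chatglm") has no entry. A minimal sketch for checking which model types the installed AutoAWQ build recognizes, assuming AWQ_CAUSAL_LM_MODEL_MAP can be imported directly from awq.models.auto as the traceback suggests:

```python
# Sketch: list the model_type strings the installed AutoAWQ maps to
# quantizable model classes. AWQ_CAUSAL_LM_MODEL_MAP is the lookup table the
# traceback shows check_and_get_model_type consulting; importing it directly
# here is an assumption about the package layout, not documented API.
from awq.models.auto import AWQ_CAUSAL_LM_MODEL_MAP

print(sorted(AWQ_CAUSAL_LM_MODEL_MAP.keys()))
# If "chatglm" (ChatGLM3-6B) or the model_type used by GLM-4-9B-Chat does not
# appear in this list, AutoAWQForCausalLM.from_pretrained will raise the
# TypeError shown above.
```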