ChatGLM-6B
[Feature] ChatGLM does not seem to support pipeline("text-generation")?
Is your feature request related to a problem? Please describe.
No response
Solutions
I want to run model inference through a pipeline, but ChatGLM does not seem to support pipeline("text-generation"). Besides calling model.chat(), how can I get ChatGLM to work with a pipeline?
The error is: The model 'ChatGLMForConditionalGeneration' is not supported for text-generation.
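For context, that message comes from the pipeline factory's model-type check: ChatGLMForConditionalGeneration is not registered in transformers' text-generation model mapping. A minimal sketch that reproduces it, assuming the public THUDM/chatglm-6b checkpoint and a CUDA GPU:

```python
from transformers import AutoModel, AutoTokenizer, pipeline

# Load ChatGLM-6B with its custom modeling code (trust_remote_code is required).
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# ChatGLMForConditionalGeneration is not in the text-generation model mapping,
# so constructing the pipeline logs:
#   The model 'ChatGLMForConditionalGeneration' is not supported for text-generation.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```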
Additional context
No response
+1. This model does not seem to support the transformers text-generation pipeline, so it cannot be plugged generically into other pipelines; you can only call model.chat().
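(For reference, the direct call mentioned above is the usage from the ChatGLM-6B README and works without any pipeline:)

```python
# Inference via the model's own chat API instead of a transformers pipeline.
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```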
I have the same problem; otherwise the code below cannot be wired together. Detailed guidance would be appreciated @cxfcxf

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
from langchain.chains import LLMChain
from langchain.llms import HuggingFacePipeline

model_id = '/home/admin/huggingface/THUDM/chatglm-6b-int4'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, revision="")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True, revision="").half().float()
model = model.eval()
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, max_length=800)
local_llm = HuggingFacePipeline(pipeline=pipe)
llm_chain = LLMChain(llm=local_llm, prompt=prompt)  # prompt: a PromptTemplate defined elsewhere
```
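One workaround that avoids the pipeline check entirely is to wrap model.chat() in a custom LangChain LLM, which is the same approach the langchain-ChatGLM project mentioned below takes. A minimal sketch, assuming the 2023-era langchain.llms.base.LLM interface; the ChatGLM class name and max_token field here are illustrative, not from this thread:

```python
from typing import Any, List, Optional

from langchain.llms.base import LLM


class ChatGLM(LLM):
    # The already-loaded model and tokenizer are injected as plain attributes.
    model: Any = None
    tokenizer: Any = None
    max_token: int = 800

    @property
    def _llm_type(self) -> str:
        return "chatglm"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Delegate to ChatGLM's own chat API instead of a transformers pipeline.
        response, _history = self.model.chat(
            self.tokenizer, prompt, history=[], max_length=self.max_token
        )
        return response
```

With that in place, the chain above becomes:

```python
local_llm = ChatGLM(model=model, tokenizer=tokenizer)
llm_chain = LLMChain(llm=local_llm, prompt=prompt)
```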
1. The LLM inside the llama-index project cannot be swapped out for ChatGLM: I tried both of the official ways to replace the LLM (using a Hugging Face model, and defining a custom LLM: https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_llms.html), and neither is compatible with ChatGLM (a rough sketch of the custom-LLM route follows this list).
2. Consider sticking with the langchain-ChatGLM project: this very incompatibility is what prompted this issue in the ChatGLM repo: https://github.com/THUDM/ChatGLM-6B/issues/160. The author of that issue is also the author of langchain-ChatGLM, which he built precisely to make LangChain work with models like MOSS and ChatGLM: https://github.com/imClumsyPanda/langchain-ChatGLM
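For reference, the custom-LLM route from that gpt-index doc amounts to wrapping a LangChain LLM in an LLMPredictor, roughly as below (2023-era llama_index API; ChatGLM is the hypothetical wrapper sketched earlier, and per point 1 this route was reported not to work cleanly with ChatGLM):

```python
from llama_index import LLMPredictor, ServiceContext

# Wrap the custom LangChain LLM so llama-index can call it.
llm_predictor = LLMPredictor(llm=ChatGLM(model=model, tokenizer=tokenizer))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
```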
Has this problem been solved? I am running into it too.