ChatGLM-6B

[Feature] ChatGLM doesn't seem to support pipeline("text-generation")?

Open shisi-cc opened this issue 1 year ago • 4 comments

Is your feature request related to a problem? Please describe.

No response

Solutions

I want to run inference on the model with pipeline, but ChatGLM doesn't seem to support pipeline("text-generation"). Besides calling model.chat(), how can I make ChatGLM usable through pipeline?

The error is: `The model 'ChatGLMForConditionalGeneration' is not supported for text-generation.`

Additional context

No response

shisi-cc avatar Apr 14 '23 04:04 shisi-cc
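[Editor's note] Since `ChatGLMForConditionalGeneration` is not registered for the transformers `text-generation` task, one workaround is to wrap `model.chat()` in a small pipeline-like callable that mimics the output shape of a text-generation pipeline. A minimal sketch (the class name is hypothetical; a real ChatGLM model and tokenizer are assumed to have been loaded elsewhere with `trust_remote_code=True`):

```python
class ChatGLMTextGenPipeline:
    """Pipeline-like wrapper that delegates to ChatGLM's model.chat().

    Mimics the output shape of transformers' text-generation pipeline:
    a list of dicts with a "generated_text" key.
    """

    def __init__(self, model, tokenizer, max_length=800):
        self.model = model
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __call__(self, prompt, history=None):
        # ChatGLM's chat() returns (response, updated_history)
        response, _history = self.model.chat(
            self.tokenizer,
            prompt,
            history=history or [],
            max_length=self.max_length,
        )
        return [{"generated_text": response}]
```

With a loaded model this would be used as `pipe = ChatGLMTextGenPipeline(model, tokenizer)` followed by `pipe("你好")`.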

+1. This model doesn't seem to support the transformers text-generation pipeline, so it can't be plugged into other pipelines in a generic way; you can only call model.chat().

cxfcxf avatar Apr 26 '23 01:04 cxfcxf

I have the same problem; otherwise the code below can't be chained together. Could you give detailed guidance, @cxfcxf?

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
from langchain.chains import LLMChain
from langchain.llms import HuggingFacePipeline

model_id = '/home/admin/huggingface/THUDM/chatglm-6b-int4'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, revision="")
# .half().float() round-trips through fp16 and back; effectively fp32 on CPU
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True, revision="").half().float()
model = model.eval()
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, max_length=800)
local_llm = HuggingFacePipeline(pipeline=pipe)

llm_chain = LLMChain(llm=local_llm, prompt=prompt)  # prompt defined elsewhere
```

lyh007 avatar Jun 15 '23 01:06 lyh007


1. The LLM in the llama-index project cannot be swapped for chatglm: I tried both of the official methods for replacing the LLM (using a Hugging Face model, and defining a custom LLM: https://gpt-index.readthedocs.io/en/latest/how_to/customization/custom_llms.html), and neither is compatible with chatglm.
2. Consider sticking with the langchain-chatglm project: this incompatibility is what prompted this issue in the chatglm repo: https://github.com/THUDM/ChatGLM-6B/issues/160. The author of that issue is also the author of the langchain-chatglm project, which was created precisely to make langchain work with models such as MOSS and chatglm: https://github.com/imClumsyPanda/langchain-ChatGLM

digitalSquirrel1 avatar Jun 27 '23 01:06 digitalSquirrel1
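[Editor's note] The custom-LLM route mentioned above boils down to subclassing LangChain's `LLM` base class and implementing `_call` so that it delegates to `model.chat()`. A self-contained sketch of that shape (the `LLMBase` class here is a stand-in so the snippet runs without LangChain installed; in real code you would subclass `langchain.llms.base.LLM` instead, and the `ChatGLMLLM` name is hypothetical):

```python
class LLMBase:
    """Stand-in for langchain.llms.base.LLM, so this sketch runs standalone."""

    def __call__(self, prompt: str) -> str:
        return self._call(prompt)


class ChatGLMLLM(LLMBase):
    """Custom-LLM-style wrapper in the spirit of langchain-ChatGLM."""

    def __init__(self, model, tokenizer, max_length: int = 800):
        self.model = model
        self.tokenizer = tokenizer
        self.max_length = max_length

    @property
    def _llm_type(self) -> str:
        return "chatglm"

    def _call(self, prompt: str, stop=None) -> str:
        # Delegate generation to ChatGLM's own chat() API
        response, _history = self.model.chat(
            self.tokenizer, prompt, history=[], max_length=self.max_length
        )
        return response
```

An instance of such a class can then be passed wherever LangChain expects an `llm`, e.g. `LLMChain(llm=ChatGLMLLM(model, tokenizer), prompt=prompt)`.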


Has this been solved? I'm running into the same problem.

BIM4SmartHydropower avatar Nov 09 '23 09:11 BIM4SmartHydropower