
Running GLM4-9b-chat on MTL iGPU gets error: ValueError: too many values to unpack (expected 2)

johnysh opened this issue 1 year ago • 4 comments

I ran glm4 on an MTL iGPU, and it reported this error (screenshot attached): `ValueError: too many values to unpack (expected 2)`
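For context, this `ValueError` typically occurs when calling code unpacks two values from a function that now returns more. A minimal, hypothetical reproduction of the failure mode (the function name and return values below are illustrative, not the actual ipex-llm/glm4 code):

```python
# Hypothetical illustration: a newer modeling_chatglm.py returns more
# values than the older calling code expects to unpack.
def attention_forward_new():
    # Pretend the updated model code now returns three values.
    return "hidden_states", "kv_cache", "attention_weights"

try:
    # Older calling code still unpacks only two values.
    hidden_states, kv_cache = attention_forward_new()
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)
```

This is why a version mismatch between the model's remote code and the loading library can surface as an unpacking error rather than an explicit version check.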

oneAPI: l_BaseKit_p_2024.0.1.46_offline.sh

My env is as below:

```
(notebook-zone) arc@arc:~/ipex-llm/python/llm/example/GPU/HuggingFace/LLM/glm4$ pip list
Package Version
accelerate 0.23.0
addict 2.4.0
aiofiles 23.2.1
aiohttp 3.9.5
aiosignal 1.3.1
aliyun-python-sdk-core 2.15.1
aliyun-python-sdk-kms 2.16.3
altair 5.3.0
annotated-types 0.7.0
antlr4-python3-runtime 4.9.3
anyio 4.4.0
asttokens 2.4.1
async-timeout 4.0.3
attrs 23.2.0
bigdl-core-cpp 2.5.0b20240610
bigdl-core-xe-21 2.5.0b20240723
bigdl-core-xe-addons-21 2.5.0b20240723
bigdl-core-xe-batch-21 2.5.0b20240723
bigdl-core-xe-esimd-21 2.5.0b20240528
bitsandbytes 0.43.1
certifi 2024.6.2
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
comm 0.2.2
contourpy 1.2.1
crcmod 1.7
cryptography 42.0.8
cycler 0.12.1
datasets 2.20.0
debugpy 1.8.1
decorator 5.1.1
dill 0.3.8
dnspython 2.6.1
einops 0.8.0
email_validator 2.1.2
exceptiongroup 1.2.1
executing 2.0.1
fastapi 0.111.0
fastapi-cli 0.0.4
ffmpy 0.3.2
filelock 3.15.1
fonttools 4.53.0
frozenlist 1.4.1
fsspec 2024.5.0
gast 0.6.0
gguf 0.6.0
gradio 4.36.1
gradio_client 1.0.1
h11 0.14.0
httpcore 1.0.5
httptools 0.6.1
httpx 0.27.0
huggingface-hub 0.23.4
idna 3.7
importlib_metadata 7.1.0
importlib_resources 6.4.0
intel-extension-for-pytorch 2.1.10+xpu
intel-openmp 2024.1.2
ipex-llm 2.1.0b20240723
ipykernel 6.29.4
ipython 8.18.1
ipywidgets 8.1.3
jedi 0.19.1
Jinja2 3.1.4
jmespath 0.10.0
jsonschema 4.22.0
jsonschema-specifications 2023.12.1
jupyter_client 8.6.2
jupyter_core 5.7.2
jupyterlab_widgets 3.0.11
kiwisolver 1.4.5
kornia 0.7.3
kornia_rs 0.1.5
latex2mathml 3.77.0
Markdown 3.6
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.0
matplotlib-inline 0.1.7
mdtex2html 1.3.0
mdurl 0.1.2
modelscope 1.11.0
mpmath 1.3.0
multidict 6.0.5
multiprocess 0.70.16
nest-asyncio 1.6.0
networkx 3.2.1
numpy 1.26.4
omegaconf 2.3.0
orjson 3.10.5
oss2 2.18.6
packaging 24.1
pandas 2.2.2
parso 0.8.4
pexpect 4.9.0
pillow 10.3.0
pip 24.0
platformdirs 4.2.2
prompt_toolkit 3.0.47
protobuf 4.21.0
psutil 6.0.0
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 17.0.0
pyarrow-hotfix 0.6
pycparser 2.22
pycryptodome 3.20.0
pydantic 2.7.4
pydantic_core 2.18.4
pydub 0.25.1
Pygments 2.18.0
pyparsing 3.1.2
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.9
pytz 2024.1
PyYAML 6.0.1
pyzmq 26.0.3
referencing 0.35.1
regex 2024.5.15
requests 2.32.3
rich 13.7.1
rpds-py 0.18.1
ruff 0.4.9
safetensors 0.4.3
scipy 1.13.1
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 69.5.1
shellingham 1.5.4
simplejson 3.19.2
six 1.16.0
sniffio 1.3.1
sortedcontainers 2.4.0
spandrel 0.3.4
stack-data 0.6.3
starlette 0.37.2
sympy 1.12.1
tabulate 0.9.0
tiktoken 0.7.0
timm 1.0.7
tokenizers 0.15.2
tomli 2.0.1
tomlkit 0.12.0
toolz 0.12.1
torch 2.1.0a0+cxx11.abi
torchsde 0.2.6
torchvision 0.16.0a0+cxx11.abi
tornado 6.4.1
tqdm 4.66.4
traitlets 5.14.3
trampoline 0.1.2
transformers 4.36.2
transformers-stream-generator 0.0.5
typer 0.12.3
typing_extensions 4.12.2
tzdata 2024.1
ujson 5.10.0
urllib3 2.2.2
uvicorn 0.30.1
uvloop 0.19.0
viola 0.3.8
watchfiles 0.22.0
wcwidth 0.2.13
websockets 11.0.3
wheel 0.43.0
widgetsnbextension 4.0.11
xxhash 3.4.1
yapf 0.40.2
yarl 1.9.4
zipp 3.19.2
```

johnysh avatar Jul 24 '24 08:07 johnysh

It's caused by a mismatch between transformers and the latest glm4. The glm4 repo was updated 9 days ago, and the line number in the error message shows you are using the latest version of the modeling_chatglm.py file, which requires transformers 4.42.4. You can switch back to the old modeling_chatglm.py and config.json, which require 4.30.2. Or, you can upgrade your transformers version to 4.42.4 (we will test this later).
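As a quick sanity check, the installed transformers (4.36.2 in the pip list above) can be compared against the 4.42.4 requirement. A minimal sketch using plain tuple comparison (assumes purely numeric dotted version components, which holds for these two versions):

```python
# Compare the installed transformers version against the version the
# new modeling_chatglm.py requires, using simple numeric comparison.
def parse_version(v: str) -> tuple:
    # "4.42.4" -> (4, 42, 4); assumes every component is an integer.
    return tuple(int(part) for part in v.split("."))

installed = "4.36.2"  # version shown in the pip list above
required = "4.42.4"   # version required by the new modeling_chatglm.py

if parse_version(installed) < parse_version(required):
    print(f"transformers {installed} is older than {required}; "
          f"run: pip install transformers=={required}")
```

For real projects, `packaging.version.parse` handles pre-releases and suffixes more robustly than this sketch.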

qiuxin2012 avatar Jul 25 '24 02:07 qiuxin2012

@JinBridger Please help test 4.42.4 and update our example's README.

qiuxin2012 avatar Jul 25 '24 02:07 qiuxin2012

> It's caused by a mismatch between transformers and the latest glm4. The glm4 repo was updated 9 days ago, and the line number in the error message shows you are using the latest version of the modeling_chatglm.py file, which requires transformers 4.42.4. You can switch back to the old modeling_chatglm.py and config.json, which require 4.30.2. Or, you can upgrade your transformers version to 4.42.4 (we will test this later).

transformers 4.42.4 still produces the error

jjzhu0579 avatar Aug 05 '24 11:08 jjzhu0579

> change to the old modeling_chatglm.py and config.json

How can I change to the old modeling_chatglm.py and config.json? The latest modeling_chatglm.py causes too many errors.
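One possible way to get the older files (a sketch, assuming the model was downloaded from the Hugging Face hub; `<OLDER_COMMIT_HASH>` is a placeholder you must look up yourself in the model repo's commit history, and `./glm-4-9b-chat-old` is an arbitrary local directory name):

```shell
# Download a specific older revision of the model repo.
# <OLDER_COMMIT_HASH> is a placeholder; find it on the model's
# "Files and versions" / commit history page on the Hugging Face hub.
huggingface-cli download THUDM/glm-4-9b-chat \
    --revision <OLDER_COMMIT_HASH> \
    --local-dir ./glm-4-9b-chat-old

# Or, if you cloned the model repo with git:
#   git log --oneline -- modeling_chatglm.py   # find the older commit
#   git checkout <OLDER_COMMIT_HASH> -- modeling_chatglm.py config.json
```

Either way, the goal is to restore the modeling_chatglm.py and config.json that require transformers 4.30.2, as suggested above.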

jjzhu0579 avatar Aug 05 '24 12:08 jjzhu0579