
vLLM support issue

Open · Mars-1990 opened this issue 9 months ago · 2 comments

System Info / 系統信息

cuda==12.4

Who can help? / 谁可以帮助到您?

@sixsixcoder

Information / 问题信息

  • [x] The official example scripts / 官方的示例脚本
  • [x] My own modified scripts / 我自己修改的脚本和任务

Reproduction / 复现过程

The script I used is shown below; the image is app_1.png.

```python
from PIL import Image
from vllm import LLM, SamplingParams
import os
import torch.distributed as dist

os.environ["CUDA_VISIBLE_DEVICES"] = "3"

model_name = "/data/models/cogagent-9b-20241220"


def procress_inputs():
    task = "搜索并选择 icloud\n"
    platform_str = "(Platform: Mac)\n"
    history_str = "\nHistory steps: "
    format_str = "(Answer in Action-Operation-Sensitive format.)"
    prompt = f"Task: {task}{history_str}\n{platform_str}{format_str}"
    return prompt


llm = LLM(
    model=model_name,
    tensor_parallel_size=1,
    max_model_len=8192,
    trust_remote_code=True,
    enforce_eager=True,
    # hf_overrides={"architectures": ["GLM4VForCausalLM"]}
)
stop_token_ids = [151329, 151336, 151338]
sampling_params = SamplingParams(temperature=0.2, max_tokens=1024, stop_token_ids=stop_token_ids)

prompt = procress_inputs()
image = Image.open("/data/codes/CogAgent/img/app_1.png").convert('RGB')
inputs = {
    "prompt": prompt,
    "multi_modal_data": {"image": image}
}
outputs = llm.generate(inputs, sampling_params=sampling_params)

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

dist.destroy_process_group()
```

With python==3.11 and vllm==0.7.3, and with the hf_overrides option enabled, the following error appears:

```
[rank0]: Traceback (most recent call last):
[rank0]:   File "/data/codes/CogAgent/app/vllm_demo.py", line 38, in <module>
[rank0]:     outputs = llm.generate(inputs, sampling_params=sampling_params)
[rank0]:               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/utils.py", line 1057, in inner
[rank0]:     return fn(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 461, in generate
[rank0]:     self._validate_and_add_requests(
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1330, in _validate_and_add_requests
[rank0]:     self._add_request(
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1348, in _add_request
[rank0]:     self.llm_engine.add_request(
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/utils.py", line 1057, in inner
[rank0]:     return fn(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 756, in add_request
[rank0]:     preprocessed_inputs = self.input_preprocessor.preprocess(
[rank0]:                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/inputs/preprocess.py", line 762, in preprocess
[rank0]:     return self._process_decoder_only_prompt(
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/inputs/preprocess.py", line 711, in _process_decoder_only_prompt
[rank0]:     prompt_comps = self._prompt_to_llm_inputs(
[rank0]:                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/inputs/preprocess.py", line 365, in _prompt_to_llm_inputs
[rank0]:     return self._process_multimodal(
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/inputs/preprocess.py", line 273, in _process_multimodal
[rank0]:     return mm_processor.apply(prompt, mm_data, mm_processor_kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1275, in apply
[rank0]:     self._validate_mm_placeholders(mm_placeholders, mm_item_counts)
[rank0]:   File "/data/miniforge3/envs/cog-vllm-2/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1186, in _validate_mm_placeholders
[rank0]:     raise RuntimeError(
[rank0]: RuntimeError: Expected there to be 1 prompt replacements corresponding to 1 image items, but instead found 0 prompt replacements! Either the prompt text has missing/incorrect tokens for multi-modal inputs, or there is a problem with your implementation of merged multi-modal processor for this model (usually arising from an inconsistency between _call_hf_processor and _get_prompt_replacements).
```
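The RuntimeError says the prompt string contains no image placeholder tokens for the single image item. A minimal sketch of building the prompt through the tokenizer's chat template instead of by hand, assuming (not verified here) that the cogagent-9b-20241220 template accepts an `image` entry in the user message and emits the placeholder tokens itself; it reuses `model_name`, `prompt`, `image`, `llm`, and `sampling_params` from the script above:

```python
from transformers import AutoTokenizer

# Assumption: the model's chat template inserts the image placeholder tokens
# when the user message carries an "image" field, as in the GLM-4V examples.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
messages = [{"role": "user", "image": image, "content": prompt}]
chat_prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)

inputs = {
    "prompt": chat_prompt,
    "multi_modal_data": {"image": image},
}
outputs = llm.generate(inputs, sampling_params=sampling_params)
```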

With python==3.10 and vllm==0.6.6, the script runs without errors, but the output is empty.
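To narrow down the empty output, it may help to print the finish reason and the raw token ids of each completion rather than only the decoded text; if generation stops immediately on one of the stop tokens, the text will be empty even though the request succeeded. A small diagnostic sketch, reusing `outputs` from the script above:

```python
# Inspect each completion: vLLM's CompletionOutput exposes the decoded text,
# the generated token ids, and the finish reason ("stop", "length", ...).
for o in outputs:
    out = o.outputs[0]
    print("finish_reason:", out.finish_reason)
    print("generated tokens:", len(out.token_ids))
    print("text:", repr(out.text))
```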


Expected behavior / 期待表现

```
Action: 打开浏览器并访问https://www.apple.com,以便进行后续操作。
Grounded Operation: LAUNCH(app='Apple', url='https://www.apple.com') <<一般操作>>
```

Mars-1990 · Mar 17 '25

Is your transformers version 4.46.0 or above? I need to see your pip list.
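For reference, a quick way to print only the versions relevant here, using just the standard library (the names below are the PyPI package names):

```python
from importlib.metadata import PackageNotFoundError, version

# Print the handful of package versions that matter for this issue.
for pkg in ("transformers", "vllm", "torch", "tokenizers"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```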

sixsixcoder · Mar 18 '25

@sixsixcoder

Environment with vllm==0.6.6 and python==3.10:

```
aiohappyeyeballs 2.5.0
aiohttp 3.11.13
aiohttp-cors 0.7.0
aiosignal 1.3.2
airportsdata 20250224
annotated-types 0.7.0
anyio 4.8.0
astor 0.8.1
async-timeout 5.0.1
attrs 25.1.0
bitsandbytes 0.45.3
blake3 1.0.4
cachetools 5.5.2
certifi 2025.1.31
charset-normalizer 3.4.1
click 8.1.8
cloudpickle 3.1.1
colorful 0.5.6
compressed-tensors 0.8.1
depyf 0.18.0
dill 0.3.9
diskcache 5.6.3
distlib 0.3.9
distro 1.9.0
einops 0.8.1
exceptiongroup 1.2.2
fastapi 0.115.11
filelock 3.17.0
frozenlist 1.5.0
fsspec 2025.3.0
gguf 0.10.0
google-api-core 2.24.2
google-auth 2.38.0
googleapis-common-protos 1.69.1
grpcio 1.71.0
h11 0.14.0
httpcore 1.0.7
httptools 0.6.4
httpx 0.28.1
huggingface-hub 0.29.2
idna 3.10
importlib_metadata 8.6.1
interegular 0.3.3
Jinja2 3.1.6
jiter 0.9.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
lark 1.2.2
lm-format-enforcer 0.10.11
loguru 0.7.3
MarkupSafe 3.0.2
mistral_common 1.5.3
mpmath 1.3.0
msgpack 1.1.0
msgspec 0.19.0
multidict 6.1.0
nest-asyncio 1.6.0
networkx 3.4.2
numpy 1.26.4
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-ml-py 12.570.86
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
openai 1.65.5
opencensus 0.11.4
opencensus-context 0.1.3
opencv-python-headless 4.11.0.86
outlines 0.1.11
outlines_core 0.1.26
packaging 24.2
partial-json-parser 0.2.1.1.post5
pillow 11.1.0
pip 25.0.1
platformdirs 4.3.6
prometheus_client 0.21.1
prometheus-fastapi-instrumentator 7.0.2
propcache 0.3.0
proto-plus 1.26.1
protobuf 5.29.3
psutil 7.0.0
py-cpuinfo 9.0.0
py-spy 0.4.0
pyasn1 0.6.1
pyasn1_modules 0.4.1
pycountry 24.6.1
pydantic 2.10.6
pydantic_core 2.27.2
python-dotenv 1.0.1
PyYAML 6.0.2
pyzmq 26.2.1
ray 2.43.0
referencing 0.36.2
regex 2024.11.6
requests 2.32.3
rpds-py 0.23.1
rsa 4.9
safetensors 0.5.3
sentencepiece 0.2.0
setuptools 75.8.2
six 1.17.0
smart-open 7.1.0
sniffio 1.3.1
starlette 0.46.1
sympy 1.13.1
tiktoken 0.9.0
tokenizers 0.21.0
torch 2.5.1
torchaudio 2.5.1
torchvision 0.20.1
tqdm 4.67.1
transformers 4.47.0
triton 3.1.0
typing_extensions 4.12.2
urllib3 2.3.0
uvicorn 0.34.0
uvloop 0.21.0
virtualenv 20.29.3
vllm 0.6.6
watchfiles 1.0.4
websockets 15.0.1
wheel 0.45.1
wrapt 1.17.2
xformers 0.0.28.post3
xgrammar 0.1.15
yarl 1.18.3
zipp 3.21.0
```

Environment with vllm==0.7.3 and python==3.11:

```
aiohappyeyeballs 2.6.1
aiohttp 3.11.14
aiosignal 1.3.2
airportsdata 20250224
annotated-types 0.7.0
anyio 4.9.0
astor 0.8.1
attrs 25.3.0
blake3 1.0.4
certifi 2025.1.31
charset-normalizer 3.4.1
click 8.1.8
cloudpickle 3.1.1
compressed-tensors 0.9.1
cupy-cuda12x 13.4.0
depyf 0.18.0
dill 0.3.9
diskcache 5.6.3
distro 1.9.0
dnspython 2.7.0
einops 0.8.1
email_validator 2.2.0
fastapi 0.115.11
fastapi-cli 0.0.7
fastrlock 0.8.3
filelock 3.18.0
frozenlist 1.5.0
fsspec 2025.3.0
gguf 0.10.0
h11 0.14.0
httpcore 1.0.7
httptools 0.6.4
httpx 0.28.1
huggingface-hub 0.29.3
idna 3.10
importlib_metadata 8.6.1
iniconfig 2.0.0
interegular 0.3.3
Jinja2 3.1.6
jiter 0.9.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
lark 1.2.2
llvmlite 0.43.0
lm-format-enforcer 0.10.11
markdown-it-py 3.0.0
MarkupSafe 3.0.2
mdurl 0.1.2
mistral_common 1.5.4
mpmath 1.3.0
msgpack 1.1.0
msgspec 0.19.0
multidict 6.1.0
nest-asyncio 1.6.0
networkx 3.4.2
numba 0.60.0
numpy 1.26.4
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
openai 1.66.3
opencv-python-headless 4.11.0.86
outlines 0.1.11
outlines_core 0.1.26
packaging 24.2
partial-json-parser 0.2.1.1.post5
pillow 11.1.0
pip 25.0.1
pluggy 1.5.0
prometheus_client 0.21.1
prometheus-fastapi-instrumentator 7.0.2
propcache 0.3.0
protobuf 6.30.1
psutil 7.0.0
py-cpuinfo 9.0.0
pybind11 2.13.6
pycountry 24.6.1
pydantic 2.10.6
pydantic_core 2.27.2
Pygments 2.19.1
pytest 8.3.5
python-dotenv 1.0.1
python-multipart 0.0.20
PyYAML 6.0.2
pyzmq 26.3.0
ray 2.40.0
referencing 0.36.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
rich-toolkit 0.13.2
rpds-py 0.23.1
safetensors 0.5.3
sentencepiece 0.2.0
setuptools 75.8.2
shellingham 1.5.4
sniffio 1.3.1
starlette 0.46.1
sympy 1.13.1
tiktoken 0.9.0
tokenizers 0.21.1
torch 2.5.1
torchaudio 2.5.1
torchvision 0.20.1
tqdm 4.67.1
transformers 4.49.0
triton 3.1.0
typer 0.15.2
typing_extensions 4.12.2
urllib3 2.3.0
uvicorn 0.34.0
uvloop 0.21.0
vllm 0.7.3
watchfiles 1.0.4
websockets 15.0.1
wheel 0.45.1
xformers 0.0.28.post3
xgrammar 0.1.11
yarl 1.18.3
zipp 3.21.0
```

Mars-1990 · Mar 18 '25