
vllm version

Open ShiJiawenwen opened this issue 6 months ago • 4 comments

Hello, could you tell me which version of vllm you used? I have tried many versions, but I get errors when I try to run the code.

ShiJiawenwen avatar May 20 '25 13:05 ShiJiawenwen

Could you try using vllm == 0.2.0?
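
For example, in a fresh environment:

pip install vllm==0.2.0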

However, I don't believe the vLLM version is the root cause of your issue. Would you mind sharing the complete error message and traceback for better troubleshooting?

In the meantime, I suggest testing your vLLM installation by loading a model through the basic LLM class. For example:

from vllm import LLM

# Any model you have access to works here; this just verifies the setup.
model = LLM("meta-llama/Llama-2-7b-hf")
# generate() returns a list of RequestOutput objects.
outputs = model.generate("Hello, ")
print(outputs[0].outputs[0].text)

yjw1029 avatar May 20 '25 14:05 yjw1029

""" python run.py Traceback (most recent call last): File "/home/shijiawen/codebase/BIPIA/examples/run.py", line 24, in from bipia.model import AutoLLM File "/home/shijiawen/codebase/BIPIA/bipia/model/init.py", line 10, in from .llama import ( File "/home/shijiawen/codebase/BIPIA/bipia/model/llama.py", line 8, in from vllm import LLM File "/home/shijiawen/anaconda3/lib/python3.11/site-packages/vllm/init.py", line 3, in from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs File "/home/shijiawen/anaconda3/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 6, in from vllm.config import (CacheConfig, ModelConfig, ParallelConfig, File "/home/shijiawen/anaconda3/lib/python3.11/site-packages/vllm/config.py", line 8, in from vllm.utils import get_cpu_memory File "/home/shijiawen/anaconda3/lib/python3.11/site-packages/vllm/utils.py", line 8, in from vllm import cuda_utils ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory """"

The above is the error message. I am trying to test the multi-turn defense: I am studying the impact of multi-turn defenses on prompt injection. Could you please provide a detailed example of a multi-turn defense against prompt injection? I have read your paper carefully, but to reproduce the experiments successfully I need more information on how to construct the multi-turn dataset. Thank you very much for your help!

ShiJiawenwen avatar May 20 '25 14:05 ShiJiawenwen

The error ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory means the CUDA 11 runtime library that your vllm build was compiled against cannot be found on your system. This isn't a bug in bipia or vllm themselves; vllm relies on a correctly set up CUDA environment to function.

Before running run.py, please verify that your CUDA installation is working correctly. A good first step is to make sure the vllm "hello" example above runs successfully on your system.
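
As a quick sanity check, this small sketch (an illustrative diagnostic, not part of BIPIA) tries to load the exact library the traceback complains about:

import ctypes

# Attempt to load the CUDA 11 runtime that this vllm build links against.
try:
    ctypes.CDLL("libcudart.so.11.0")
    print("libcudart.so.11.0 found")
except OSError as err:
    # If this fails, install the CUDA 11 toolkit or add its lib directory
    # to LD_LIBRARY_PATH before retrying run.py.
    print("CUDA 11 runtime not found:", err)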

Regarding the construction of the multi-turn dataset, lines 140-150 in examples/run.py are key:

processed_datasets = processed_datasets.map(
    partial(
        llm.process_fn,
        prompt_construct_fn=partial(
            pia_builder.construct_prompt,
            require_system_prompt=llm.require_system_prompt,
            ign_guidance=(
                IGN_GUIDANCES[args.dataset_name]
                if args.add_ign_guidance
                else ""
            ),
        ),
    ),
    # remove_columns=DATA_INFO[args.dataset_name],
    desc="Processing Indirect PIA datasets.",
)

For your use case, ensure you set add_ign_guidance=True and require_system_prompt=True when configuring your run.
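
To make the multi-turn structure concrete, here is a minimal sketch of what the mapped function builds per example. It assumes, as the require_system_prompt flag suggests, that construct_prompt returns a separate system prompt and user prompt; the names mirror the snippet above, and the exact return shape may differ in your checkout:

# Minimal sketch, assuming construct_prompt returns a (system, user) pair
# when require_system_prompt=True; verify against your local run.py.
system_prompt, user_prompt = pia_builder.construct_prompt(
    example,
    require_system_prompt=True,
    ign_guidance=IGN_GUIDANCES[args.dataset_name],
)
messages = [
    {"role": "system", "content": system_prompt},  # first turn
    {"role": "user", "content": user_prompt},      # second turn
]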

yjw1029 avatar May 20 '25 15:05 yjw1029

Thanks a lot! Can the construction of a multi-turn defense be understood as having the system_prompt_template as the first turn, and the user_prompt_template[0] as the second turn?

ShiJiawenwen avatar May 20 '25 15:05 ShiJiawenwen