
_pickle.UnpicklingError: could not find MARK when loading wizardcoder

Open · ishotoli opened this issue

Thanks for fixing #254. After updating the code to the latest version, I executed the following command:

python -m vllm.entrypoints.openai.api_server --model /home/foo/workshop/text-generation-webui/models/WizardLM_WizardCoder-15B-V1.0/

the following error occurred:

INFO 07-03 08:38:16 llm_engine.py:60] Initializing an LLM engine with config: model='/home/foo/workshop/text-generation-webui/models/WizardLM_WizardCoder-15B-V1.0/', tokenizer='/home/foo/workshop/text-generation-webui/models/WizardLM_WizardCoder-15B-V1.0/', tokenizer_mode=auto, dtype=torch.float16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0)
Traceback (most recent call last):
  File "/home/foo/anaconda3/envs/aigc/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/foo/anaconda3/envs/aigc/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/foo/workshop/vllm/vllm/entrypoints/openai/api_server.py", line 313, in <module>
    engine = AsyncLLMEngine.from_engine_args(engine_args)
  File "/home/foo/workshop/vllm/vllm/engine/async_llm_engine.py", line 212, in from_engine_args
    engine = cls(engine_args.worker_use_ray,
  File "/home/foo/workshop/vllm/vllm/engine/async_llm_engine.py", line 49, in __init__
    self.engine = engine_class(*args, **kwargs)
  File "/home/foo/workshop/vllm/vllm/engine/llm_engine.py", line 97, in __init__
    worker = worker_cls(
  File "/home/foo/workshop/vllm/vllm/worker/worker.py", line 45, in __init__
    self.model = get_model(model_config)
  File "/home/foo/workshop/vllm/vllm/model_executor/model_loader.py", line 49, in get_model
    model.load_weights(
  File "/home/foo/workshop/vllm/vllm/model_executor/models/gpt_bigcode.py", line 226, in load_weights
    for name, loaded_weight in hf_model_weights_iterator(
  File "/home/foo/workshop/vllm/vllm/model_executor/weight_utils.py", line 73, in hf_model_weights_iterator
    state = torch.load(bin_file, map_location="cpu")
  File "/home/foo/anaconda3/envs/aigc/lib/python3.10/site-packages/torch/serialization.py", line 815, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/foo/anaconda3/envs/aigc/lib/python3.10/site-packages/torch/serialization.py", line 1033, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: could not find MARK
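Not part of the original report, but a possible diagnosis: `torch.load` raising `could not find MARK` on the very first pickle opcode usually means the `.bin` file is not a valid checkpoint at all — commonly a git-lfs pointer file that was never pulled, or a truncated download. A minimal, hypothetical check (the function name and messages are illustrative, not part of vLLM) that inspects the file headers without needing torch:

```python
# Hypothetical diagnostic for "could not find MARK": inspect each *.bin
# file's leading bytes to see whether it plausibly is a torch checkpoint,
# a stray git-lfs pointer, or a truncated/corrupted file.
import os

LFS_MAGIC = b"version https://git-lfs"   # git-lfs pointer files start with this text
ZIP_MAGIC = b"PK\x03\x04"                # zip-based torch.save format
PICKLE_MAGIC = b"\x80"                   # legacy torch.save format starts with a pickle PROTO opcode

def classify_bin(path):
    """Return a short verdict string for one weight file."""
    with open(path, "rb") as f:
        head = f.read(24)
    if head.startswith(LFS_MAGIC):
        return "git-lfs pointer (run `git lfs pull` in the model directory)"
    if head.startswith(ZIP_MAGIC) or head.startswith(PICKLE_MAGIC):
        return "header looks like a torch checkpoint"
    return "unrecognized header -- possibly truncated or corrupted"

if __name__ == "__main__":
    model_dir = "."  # point this at the WizardCoder model directory
    for name in sorted(os.listdir(model_dir)):
        if name.endswith(".bin"):
            print(name, "->", classify_bin(os.path.join(model_dir, name)))
```

If every file reports a torch-checkpoint header, re-downloading the weights and comparing sizes against the Hugging Face repository is the next thing I'd try.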

ishotoli · Jul 03 '23 08:07