
RuntimeError: Internal: could not parse ModelProto from ../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf/tokenizer.model

Open hyun95roh opened this issue 1 year ago • 0 comments

Greetings. I'd like to ask how I should troubleshoot this. Here is the error message:

RuntimeError: Internal: could not parse ModelProto from ../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf/tokenizer.model

Environment settings:

  • WSL2 Ubuntu 24.04
  • GPU: RTX 4060
  • nvcc --version: CUDA 12.0
  • torch==1.12.1, torchaudio==0.12.1, torchvision==0.13.1

Full error message:

(videollama) root@Roh:~/vscode/Video-LLaMA# python demo_audiovideo.py     --cfg-path eval_configs/video_llama_eval_withaudio.yaml     --model_type llama_v2
/root/anaconda3/envs/videollama/lib/python3.9/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in 0.14. Please use the 'torchvision.transforms.functional' module instead.
  warnings.warn(
/root/anaconda3/envs/videollama/lib/python3.9/site-packages/torchvision/transforms/_transforms_video.py:25: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in 0.14. Please use the 'torchvision.transforms' module instead.
  warnings.warn(
Initializing Chat
Loading VIT
Loading VIT Done
Loading Q-Former
Traceback (most recent call last):
  File "/root/vscode/Video-LLaMA/demo_audiovideo.py", line 67, in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
  File "/root/vscode/Video-LLaMA/video_llama/models/video_llama.py", line 574, in from_config
    model = cls(
  File "/root/vscode/Video-LLaMA/video_llama/models/video_llama.py", line 122, in __init__
    self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model, use_fast=False)
  File "/root/anaconda3/envs/videollama/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
    return cls._from_pretrained(
  File "/root/anaconda3/envs/videollama/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/root/anaconda3/envs/videollama/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py", line 96, in __init__
    self.sp_model.Load(vocab_file)
  File "/root/anaconda3/envs/videollama/lib/python3.9/site-packages/sentencepiece/__init__.py", line 961, in Load
    return self.LoadFromFile(model_file)
  File "/root/anaconda3/envs/videollama/lib/python3.9/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: could not parse ModelProto from ../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf/tokenizer.model
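To isolate whether the problem is Video-LLaMA or the tokenizer file itself, this minimal snippet on my side should reproduce just the failing call outside the demo (the path is the same one from the error message):

import sentencepiece as spm

# Same call the traceback ends in (SentencePieceProcessor_LoadFromFile);
# if it also fails here, the tokenizer.model file itself is the problem,
# not the Video-LLaMA code.
sp = spm.SentencePieceProcessor()
sp.Load("../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf/tokenizer.model")
print(sp.GetPieceSize())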

pip list (sry, it's quite long!) : accelerate 0.16.0 aiofiles 24.1.0 aiohttp 3.8.4 aiosignal 1.3.1 altair 5.4.1 antlr4-python3-runtime 4.9.3 anyio 4.6.2.post1 argon2-cffi 23.1.0 argon2-cffi-bindings 21.2.0 arrow 1.3.0 asttokens 2.4.1 async-lru 2.0.4 async-timeout 4.0.2 attrs 22.2.0 av 13.1.0 babel 2.16.0 beautifulsoup4 4.12.3 bitsandbytes 0.37.0 bleach 6.2.0 blis 0.7.11 braceexpand 0.1.7 Brotli 1.0.9 catalogue 2.0.10 cchardet 2.1.7 certifi 2024.8.30 cffi 1.17.1 chardet 5.1.0 charset-normalizer 3.3.2 click 8.1.7 comm 0.2.2 confection 0.1.5 contourpy 1.0.7 cycler 0.11.0 cymem 2.0.8 debugpy 1.8.8 decorator 5.1.1 decord 0.6.0 defusedxml 0.7.1 docker-pycreds 0.4.0 einops 0.8.0 exceptiongroup 1.2.2 executing 2.1.0 fastapi 0.115.5 fastjsonschema 2.20.0 ffmpy 0.4.0 filelock 3.9.0 fonttools 4.38.0 fqdn 1.5.1 frozenlist 1.3.3 fsspec 2024.10.0 ftfy 6.3.1 fvcore 0.1.5.post20221221 gitdb 4.0.11 GitPython 3.1.43 gradio 3.24.1 gradio_client 0.0.8 h11 0.14.0 httpcore 1.0.6 httpx 0.27.2 huggingface-hub 0.13.4 idna 3.7 importlib_metadata 8.4.0 importlib-resources 5.12.0 iopath 0.1.10 ipykernel 6.29.5 ipython 8.18.1 isoduration 20.11.0 jedi 0.19.2 Jinja2 3.1.4 joblib 1.4.2 json5 0.9.28 jsonpointer 3.0.0 jsonschema 4.23.0 jsonschema-specifications 2024.10.1 jupyter_client 8.6.3 jupyter_core 5.7.2 jupyter-events 0.10.0 jupyter-lsp 2.2.5 jupyter_server 2.14.2 jupyter_server_terminals 0.5.3 jupyterlab 4.2.5 jupyterlab_pygments 0.3.0 jupyterlab_server 2.27.3 kiwisolver 1.4.4 langcodes 3.4.1 language_data 1.2.0 linkify-it-py 2.0.3 llvmlite 0.43.0 marisa-trie 1.2.1 markdown-it-py 2.2.0 MarkupSafe 3.0.2 matplotlib 3.7.0 matplotlib-inline 0.1.7 mdit-py-plugins 0.3.3 mdurl 0.1.2 mistune 3.0.2 mkl_fft 1.3.11 mkl_random 1.2.8 mkl-service 2.4.0 mpmath 1.3.0 multidict 6.0.4 murmurhash 1.0.10 narwhals 1.13.5 nbclient 0.10.0 nbconvert 7.16.4 nbformat 5.10.4 nest-asyncio 1.6.0 networkx 3.2.1 nltk 3.9.1 notebook 7.2.2 notebook_shim 0.2.4 numba 0.60.0 numpy 1.26.4 nvidia-cublas-cu12 12.4.5.8 nvidia-cuda-cupti-cu12 12.4.127 nvidia-cuda-nvrtc-cu12 12.4.127 nvidia-cuda-runtime-cu12 12.4.127 nvidia-cudnn-cu12 9.1.0.70 nvidia-cufft-cu12 11.2.1.3 nvidia-curand-cu12 10.3.5.147 nvidia-cusolver-cu12 11.6.1.9 nvidia-cusparse-cu12 12.3.1.170 nvidia-nccl-cu12 2.21.5 nvidia-nvjitlink-cu12 12.4.127 nvidia-nvtx-cu12 12.4.127 omegaconf 2.3.0 openai 0.27.0 opencv-python 4.7.0.72 orjson 3.10.11 overrides 7.7.0 packaging 23.0 pandas 2.2.3 pandocfilters 1.5.1 parameterized 0.9.0 parso 0.8.4 pathlib_abc 0.1.1 pathy 0.11.0 peft 0.5.0 pexpect 4.9.0 pillow 11.0.0 pip 24.2 platformdirs 4.3.6 portalocker 2.10.1 preshed 3.0.9 prometheus_client 0.21.0 prompt_toolkit 3.0.48 protobuf 5.28.3 psutil 5.9.4 ptyprocess 0.7.0 pure_eval 0.2.3 pycocoevalcap 1.2 pycocotools 2.0.6 pycparser 2.22 pydantic 1.10.19 pydub 0.25.1 Pygments 2.18.0 pynndescent 0.5.13 pyparsing 3.0.9 PySocks 1.7.1 python-dateutil 2.8.2 python-json-logger 2.0.7 python-multipart 0.0.17 pytorchvideo 0.1.5 pytz 2024.2 PyYAML 6.0 pyzmq 26.2.0 referencing 0.35.1 regex 2022.10.31 requests 2.32.3 rfc3339-validator 0.1.4 rfc3986-validator 0.1.1 rpds-py 0.21.0 safetensors 0.4.5 scikit-learn 1.2.2 scipy 1.10.1 semantic-version 2.10.0 Send2Trash 1.8.3 sentence-transformers 2.2.2 sentencepiece 0.2.0 sentry-sdk 2.18.0 setproctitle 1.3.3 setuptools 75.1.0 six 1.16.0 smart-open 6.4.0 smmap 5.0.1 sniffio 1.3.1 soupsieve 2.6 spacy 3.5.1 spacy-legacy 3.0.12 spacy-loggers 1.0.5 srsly 2.4.8 stack-data 0.6.3 starlette 0.41.2 sympy 1.13.1 tabulate 0.9.0 tenacity 8.2.2 termcolor 2.5.0 terminado 0.18.1 thinc 8.1.12 
threadpoolctl 3.5.0 timm 0.6.13 tinycss2 1.4.0 tokenizers 0.13.2 tomli 2.1.0 torch 1.12.1 torchaudio 0.12.1 torchvision 0.13.1 tornado 6.4.1 tqdm 4.64.1 traitlets 5.14.3 transformers 4.28.0 triton 3.1.0 typer 0.7.0 types-python-dateutil 2.9.0.20241003 typing_extensions 4.11.0 tzdata 2024.2 uc-micro-py 1.0.3 umap-learn 0.5.7 uri-template 1.3.0 urllib3 2.2.3 uvicorn 0.32.0 wandb 0.18.7 wasabi 1.1.3 wcwidth 0.2.13 webcolors 24.11.1 webdataset 0.2.48 webencodings 0.5.1 websocket-client 1.8.0 websockets 14.1 wheel 0.44.0 yacs 0.1.8 yarl 1.8.2 zipp 3.14.0
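One thing I suspect (an assumption on my part, not something the error confirms): the tokenizer.model may not have been fully downloaded, e.g. it could be a Git LFS pointer file instead of the actual SentencePiece binary, which would also be unparseable. A quick check:

import os

# A real LLaMA-2 tokenizer.model is a binary of roughly 500 KB; a Git LFS
# pointer is a tiny text file starting with
# "version https://git-lfs.github.com/spec/v1".
path = "../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf/tokenizer.model"
print("size (bytes):", os.path.getsize(path))
with open(path, "rb") as f:
    print("first bytes:", f.read(64))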

hyun95roh · Nov 15 '24, 05:11