[BUG] RuntimeError when running inference on ScienceQA
When did you clone our code?
I cloned the code base after 5/1/23
Describe the issue
Issue:
Command:
python -m llava.eval.model_vqa_science \
--model-name /path/to/LLaVA-13b-v0-science_qa \
--question-file /path/to/ScienceQA/data/scienceqa/llava_test.json \
--image-folder /path/to/ScienceQA/data/scienceqa/images/test \
--answers-file vqa/results/ScienceQA/test_llava-13b.jsonl \
--answer-prompter \
--conv-mode simple
Log:
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
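For reference, a quick way to confirm which PyTorch, CUDA, and cuDNN builds the script actually sees (this check is not part of the original report, just a common first diagnostic for this error):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.backends.cudnn.version(), torch.cuda.is_available())"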
Hi, you may try reinstalling PyTorch with CUDA 11.7, using the commands provided by this user from our community: https://github.com/haotian-liu/LLaVA/issues/123#issuecomment-1539434115.
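The authoritative commands are in the linked comment; as a rough sketch only (package versions here are illustrative, not copied from that comment), a CUDA 11.7 reinstall typically looks like:

pip uninstall torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu117

After reinstalling, rerun the evaluation command above to check whether the cuDNN error is resolved.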