VILA
Question about the output
This is the output of the model.
```
python -W ignore llava/eval/run_vila.py \
    --model-path Efficient-Large-Model/Llama-3-VILA1.5-8b \
    --conv-mode llama_3 \
    --query " \
    --image-file "demo_images/av.png"
```
This is the inference command. Why is the output wrong?
BTW, I have disabled the flash attention module.
It seems to work properly on my side, so this should not be an issue with flash-attn. Could you make sure you have done a fresh install following the README?
> BTW, I have disabled the flash attention module.
Hi, can you please explain how you turned it off? I have a V100 and can't run with flash attention on. Thanks!
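For what it's worth, one common way to avoid flash-attn on pre-Ampere GPUs like the V100 is to request the plain PyTorch ("eager") attention backend when loading the model through Hugging Face transformers. This is a hedged sketch, not VILA's documented method; the helper `build_load_kwargs` below is a hypothetical name, and whether `run_vila.py` honors `attn_implementation` is an assumption you would need to verify against the repo.

```python
# Hypothetical sketch: select the attention backend via from_pretrained kwargs.
# flash-attn (flash_attention_2) requires compute capability >= 8.0 (Ampere+),
# which the V100 (7.0) lacks, so we fall back to the eager implementation.

def build_load_kwargs(use_flash_attn: bool) -> dict:
    """Return from_pretrained kwargs that pick the attention implementation."""
    impl = "flash_attention_2" if use_flash_attn else "eager"
    return {"attn_implementation": impl}

# Usage (not executed here; downloading the model is out of scope):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "Efficient-Large-Model/Llama-3-VILA1.5-8b",
#     **build_load_kwargs(use_flash_attn=False),
# )
print(build_load_kwargs(False)["attn_implementation"])  # eager
```

If the eval script hardcodes the attention class instead of passing kwargs through, patching it to accept this option would be the analogous change.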
same problem
Closing due to inactivity. We haven't fully tested VILA on older hardware.