
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

Results: 75 Video-LLaMA issues

When I cast the model weights to half precision, either with `model.half()` or by setting `dtype=torch.float16`/`torch.bfloat16`, inference on CPU gets much slower.

Greetings. I want to ask what I should do for troubleshooting. Here is the error message:

```
RuntimeError: Internal: could not parse ModelProto from ../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf/tokenizer.model
```

**Environment settings**: - WSL2...
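A hedged diagnostic sketch for the error above: SentencePiece raises "could not parse ModelProto" when `tokenizer.model` is not a valid binary model, and a common cause is cloning the repo without Git LFS installed, which leaves a small text pointer file in place of the real weights. The helper names below (`looks_like_lfs_pointer`, `check_tokenizer`) are illustrative, not part of the project.

```python
# A Git LFS pointer file begins with this version line instead of
# binary protobuf data.
LFS_MAGIC = b"version https://git-lfs.github.com/spec"


def looks_like_lfs_pointer(first_bytes: bytes) -> bool:
    """True if the content is a Git LFS pointer, not real model data."""
    return first_bytes.startswith(LFS_MAGIC)


def check_tokenizer(path: str) -> None:
    """Raise a descriptive error if tokenizer.model is an LFS stub."""
    with open(path, "rb") as f:
        head = f.read(64)
    if looks_like_lfs_pointer(head):
        raise RuntimeError(
            f"{path} is a Git LFS pointer, not the real file; run "
            "`git lfs pull` (or re-download it) to fetch tokenizer.model."
        )
```

If the file passes this check but SentencePiece still fails, the download may be truncated; comparing the file size against the one published on the model hub is a reasonable next step.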

This contribution adds a translated README and documentation in Turkish. As our country is poor and still developing, I wanted to contribute to this project. Accessibility is an important...

The demo on Hugging Face is no longer working. Can you please fix it? I also tried the ModelScope one; neither of them works anymore. I hope...

This is a draft issue created to gather more details. Please provide additional information to complete the issue.