LLaVA-NeXT

Results: 315 LLaVA-NeXT issues

![image](https://github.com/LLaVA-VL/LLaVA-NeXT/assets/55685981/3c9059ea-95be-41c7-a555-d2ab407f374f)

I'm getting this error in the example code of LLaVA-NeXT:

```
ImportError                               Traceback (most recent call last)
Cell In[5], line 1
----> 1 from llava.model.builder import load_pretrained_model
      2 from llava.mm_utils import ...
```
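A minimal diagnostic sketch, assuming the `ImportError` simply means the `llava` package is not importable in the active environment (for example, the LLaVA-NeXT repo was not installed with `pip install -e .`); this interpretation is an assumption, not part of the original report.

```python
# Diagnostic sketch (assumption: the ImportError comes from `llava` not being on the path).
import importlib.util

if importlib.util.find_spec("llava") is None:
    print("`llava` is not importable; install LLaVA-NeXT (e.g. `pip install -e .` from the repo root).")
else:
    # If the package itself is importable, the failure is inside llava.model.builder,
    # often caused by a missing or mismatched dependency.
    from llava.model.builder import load_pretrained_model
    print("llava.model.builder imported fine:", load_pretrained_model is not None)
```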

Improvement of the `copy()` function. This PR is related to #33.

When calling `copy()` on the `llama_v3` version of `Conversation`, the `tokenizer` and other attributes are not copied, causing an error at `self.tokenizer.apply_chat_template()` in `get_prompt()`.
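A simplified sketch of the kind of fix this describes, assuming a cut-down `Conversation` with only a few of the real fields (the actual class in `llava/conversation.py` has more attributes, such as separators and offsets); this is not the upstream diff.

```python
# Sketch only: field names are assumptions. The point is that copy() must carry over
# every attribute, including the tokenizer that get_prompt() needs for apply_chat_template().
import dataclasses
from typing import Any, List, Tuple

@dataclasses.dataclass
class Conversation:
    system: str
    roles: Tuple[str, str]
    messages: List[List[str]]
    version: str = "llama_v3"
    tokenizer: Any = None  # used by get_prompt() via apply_chat_template()

    def copy(self) -> "Conversation":
        return Conversation(
            system=self.system,
            roles=self.roles,
            messages=[[role, msg] for role, msg in self.messages],
            version=self.version,
            tokenizer=self.tokenizer,  # previously dropped, which broke get_prompt() on the copy
        )
```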

```python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN, IGNORE_INDEX
from llava.conversation import conv_templates, SeparatorStyle
from PIL import Image
import requests
import ...
```
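A short usage sketch for these imports, assuming `load_pretrained_model(model_path, model_base, model_name, ...)` returns `(tokenizer, model, image_processor, context_len)` as in the LLaVA codebase; the checkpoint name is only an example.

```python
# Sketch under the stated assumptions; not the repo's official example.
model_path = "lmms-lab/llama3-llava-next-8b"  # example checkpoint, swap in your own
model_name = get_model_name_from_path(model_path)

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, None, model_name, device_map="auto"
)
```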

I wonder what the "32K" signifies when using the "lmms-lab/LLaVA-NeXT-Video-7B-32K" checkpoint.

In the file conversation.py, the Llama-3 chat prompt is built at line 107 by `self.tokenizer.apply_chat_template(chat_template_messages, tokenize=False, add_generation_prompt=False)`, which means the special tokens will be inserted automatically by the chat template of...
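A hedged illustration of that behaviour with a plain Llama-3 tokenizer; the model id below is an example (and gated on the Hub), and any tokenizer with a Llama-3-style chat template behaves the same way.

```python
from transformers import AutoTokenizer

# Example tokenizer with a Llama-3 chat template (assumption: you have access to this gated repo).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Describe the image."},
]

# tokenize=False returns the rendered prompt string; the template itself inserts the
# special tokens (begin-of-text, header, end-of-turn markers), so they should not be
# appended again by hand.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
print(prompt)
```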

Currently, the `tokenizer_config` is the same as the base Llama 3 model's, which isn't instructive as to how to pass in images. Adding a very short snippet of code outlining how...
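A sketch of the kind of snippet being requested, reusing the `llava` package helpers imported above: `DEFAULT_IMAGE_TOKEN` goes into the prompt, `process_images` produces pixel values, and `tokenizer_image_token` splices `IMAGE_TOKEN_INDEX` into the input ids. The conversation template name, image path, and generation arguments are assumptions, not the repo's official example.

```python
import copy
import torch
from PIL import Image

# Assumes tokenizer, model, image_processor were loaded as sketched earlier.
image = Image.open("example.jpg")  # placeholder path; any RGB image
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [img.to(model.device, dtype=torch.float16) for img in image_tensor]

# Build the prompt: the image placeholder token goes into the user turn,
# and tokenizer_image_token() replaces it with IMAGE_TOKEN_INDEX in the ids.
conv = copy.deepcopy(conv_templates["llava_llama_3"])  # template name is an assumption
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
input_ids = input_ids.unsqueeze(0).to(model.device)

with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=image_tensor,
        image_sizes=[image.size],
        do_sample=False,
        max_new_tokens=128,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```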

How do we deploy this model via an API? Can I deploy it with vLLM or lmdeploy? I can't find any example of running this with HuggingFace Transformers. I want to...
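One possible route for the Transformers part of the question, assuming the checkpoint in question has a converted `llava-hf` counterpart on the Hub; the model id and prompt format below follow the `llava-hf/llava-v1.6-mistral-7b-hf` convention and are illustrative, not an official deployment recipe.

```python
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # converted checkpoint; pick the one matching your model
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")  # placeholder; any RGB image

# Mistral-style prompt with the <image> placeholder expected by the processor.
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```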

Hi, dear authors: I noticed lots of researchers are asking for the finetuning code, but since the v1.6 code is mostly the same as v1.5, why not publish the llava-next...