LLaVA-NeXT
When using LLaVA-Llama3 for batch inference with the generate function, the results are incorrect.
When I use the generate function with batched input, the results are inconsistent with those produced at batch size 1; the output often contains many empty strings.
# batch inference
batch_outputs = self.model.generate(
    inputs=input_ids_list[i:i + batch_size],
    images=images[i:i + batch_size],
    image_sizes=[(336, 336)] * input_ids_list[i:i + batch_size].shape[0],
    attention_mask=attention_mask[i:i + batch_size],
    **generation_config,
)
# single inference
single_outputs = self.model.generate(
    inputs=input_ids_list[0:1],
    images=images[0:1],
    image_sizes=[(336, 336)],
    attention_mask=attention_mask[0:1],
    **generation_config,
)
The batch inference results are correct when working with pure text. What could be the cause of this issue, and how can I resolve it? Thank you!
I had this exact same issue. The root cause is that the internal tokenizer pads on the right, so the model ends up conditioning its generation on pad tokens. You can fix this by manually setting tokenizer_padding_side when loading the model:
overwrite_config = {"tokenizer_padding_side": "left"}
llava_tokenizer, llava_model, llava_image_processor, llava_max_length = load_pretrained_model(..., overwrite_config=overwrite_config)
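If reloading the model is not convenient, you can also left-pad the already-tokenized prompts yourself before calling generate. The helper below is only a minimal sketch, assuming input_ids_list is a list of 1-D token-id tensors and pad_token_id comes from the tokenizer; it is not part of the LLaVA-NeXT API:

import torch

def left_pad(input_ids_list, pad_token_id):
    """Left-pad a list of 1-D input_ids tensors to a common length.

    Returns padded input_ids and a matching attention mask, so that
    generation never starts from trailing pad tokens.
    """
    max_len = max(ids.shape[0] for ids in input_ids_list)
    input_ids = torch.full((len(input_ids_list), max_len), pad_token_id, dtype=torch.long)
    attention_mask = torch.zeros((len(input_ids_list), max_len), dtype=torch.long)
    for row, ids in enumerate(input_ids_list):
        # Place the real tokens at the right edge; padding stays on the left.
        input_ids[row, max_len - ids.shape[0]:] = ids
        attention_mask[row, max_len - ids.shape[0]:] = 1
    return input_ids, attention_mask

The returned tensors can then be sliced per batch and passed as inputs and attention_mask exactly as in the snippets above.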
I opened an issue so that this can hopefully be fixed upstream: https://github.com/LLaVA-VL/LLaVA-NeXT/issues/398