Support for multi-modal models

rlouf opened this issue on Feb 14, 2024 · 4 comments

Presentation of the new feature

There are more and more accessible multi-modal models out there, such as llava, and constrained generation applies to every auto-regressive text generation model regardless of its input.

Where does it fit in Outlines?

Maybe the most reasonable way would be to let users pass (prompt, image) tuples to the API functions and use multipledispatch to dispatch on both the model and the prompt. Alternatively, we could create a new MultimodalModel class and only dispatch on the model type, as we currently do.

We need to make sure users can't unknowingly shoot themselves in the foot; the MultimodalModel class would make this easy.
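
To make the tuple option more concrete, here is a rough sketch of the dispatch-based approach. None of these names exist in Outlines today; prepare_inputs, TextModel, MultimodalModel and VisionPrompt are placeholders for illustration only.

    # Hypothetical sketch only: these classes and the prepare_inputs function
    # are placeholders, not part of the Outlines codebase.
    from dataclasses import dataclass

    from multipledispatch import dispatch
    from PIL.Image import Image


    class TextModel:
        """Stand-in for an existing text-only model wrapper."""


    class MultimodalModel:
        """Stand-in for a wrapper around a vision-language model such as llava."""


    @dataclass
    class VisionPrompt:
        """A (text, image) pair that a user could pass to the generation API."""
        text: str
        image: Image


    @dispatch(TextModel, str)
    def prepare_inputs(model, prompt):
        # text-only path: tokenize the prompt as we do today
        ...


    @dispatch(MultimodalModel, VisionPrompt)
    def prepare_inputs(model, prompt):
        # multi-modal path: run the model's processor on both text and image,
        # yielding input_ids, attention_mask and pixel_values
        ...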

My main concern is that we might need to make the generator more complex, or duplicate part of it.

Are you willing to open a PR?

Yes, although I'd appreciate it if someone else were willing to take the lead. I'm happy to help with the design.

rlouf · Feb 14 '24

Here is what the transformers interface looks like:

https://github.com/huggingface/transformers/blob/354775bc5755c4a6c47e008d28f27f8ccdcf8f8f/src/transformers/models/llava/modeling_llava.py#L377-L395

    >>> from PIL import Image
    >>> import requests
    >>> from transformers import AutoProcessor, LlavaForConditionalGeneration

    >>> model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")
    >>> processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

    >>> prompt = "<image>\nUSER: What's the content of the image?\nASSISTANT:"
    >>> url = "https://www.ilankelman.org/stopsigns/australia.jpg"
    >>> image = Image.open(requests.get(url, stream=True).raw)

    >>> inputs = processor(text=prompt, images=image, return_tensors="pt")

    >>> # Generate
    >>> generate_ids = model.generate(**inputs, max_length=30)
    >>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    "\nUSER: What's the content of the image?\nASSISTANT: The image features a stop sign on a street corner"

inputs contains input_ids, attention_mask, and pixel_values.
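
Schematically, the shapes look like this (illustrative only, not exact values):

    # Illustrative only: the processor returns a dict-like BatchFeature of tensors.
    for name, tensor in inputs.items():
        print(name, tuple(tensor.shape))
    # input_ids      -> (batch_size, sequence_length)
    # attention_mask -> (batch_size, sequence_length)
    # pixel_values   -> (batch_size, num_channels, height, width)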

I agree regarding complexity: the generator would need to manage pixel_values as well. The main difference is augmenting the attention mask, which would need to happen in sequence_generator, since the augmentation is applied on every forward pass. Here is how transformers does it:

https://github.com/huggingface/transformers/blob/354775bc5755c4a6c47e008d28f27f8ccdcf8f8f/src/transformers/models/llava/modeling_llava.py#L430-L433
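
As a simplified sketch of that augmentation step (this assumes, for illustration, that the image features sit at the start of the sequence; the real llava implementation splices them in at the <image> placeholder position):

    # Simplified sketch, not the transformers or Outlines implementation: it
    # assumes the image features occupy the first num_image_tokens positions,
    # whereas the real llava code inserts them where the <image> token sits.
    import torch


    def expand_attention_mask(attention_mask: torch.Tensor, num_image_tokens: int) -> torch.Tensor:
        """Add attended positions covering the projected image features."""
        batch_size = attention_mask.shape[0]
        image_mask = torch.ones(
            (batch_size, num_image_tokens),
            dtype=attention_mask.dtype,
            device=attention_mask.device,
        )
        # every image-feature position is attended to, so the mask simply grows
        return torch.cat([image_mask, attention_mask], dim=1)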

I propose:

  • We create an abstract LLaVaModel and allow passing an images kwarg to the generator.
  • We create a LlavaSequenceGenerator subclass which handles the necessary logic for multi-modal models. This subclass would be used whenever a LLaVaModel is used.
  • We have two options for sequence_generator:
    • a. Create a separate llava_sequence_generator which accounts for the image features in the attention mask.
    • b. Refactor the generation loop into SequenceGenerator.gen_tokens, which LlavaSequenceGenerator overrides (I prefer this one; unifying the API into one module makes a lot of sense in terms of clarity and composability). A sketch of this option follows below.
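
Here is what option (b) could look like; all names and signatures are illustrative, not the current codebase:

    # Hypothetical sketch of option (b); class names and signatures are
    # illustrative, not the current Outlines API.
    class SequenceGenerator:
        def __init__(self, fsm, model, sampler, device):
            self.fsm = fsm
            self.model = model
            self.sampler = sampler
            self.device = device

        def gen_tokens(self, token_ids, attention_mask, **model_kwargs):
            # shared constrained-decoding loop: mask the logits with the FSM,
            # sample a token, append it, repeat until the FSM terminates
            ...


    class LlavaSequenceGenerator(SequenceGenerator):
        def gen_tokens(self, token_ids, attention_mask, *, pixel_values=None, **model_kwargs):
            # augment the attention mask so it also covers the image-feature
            # positions, thread pixel_values through to the forward pass, and
            # otherwise reuse the parent loop unchanged
            ...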

Would love to know your thoughts!

lapp0 · Feb 14 '24

Any updates on this thread?

Reichenbachian · Mar 08 '24

Hey! I just wanted to know whether multimodal models can be used with the connector being implemented in issue #728.

Kamakshi8104 · Mar 12 '24

Yes, you should be able to use this with multimodal models!

rlouf · Mar 12 '24

For anyone who stumbles on this later, check out the relevant cookbook for working with vision models here.

cpfiffer · Nov 18 '24