
Explore LLaVA

abrichr opened this issue 1 year ago · 2 comments

Feature request

How can we take advantage of https://github.com/haotian-liu/LLaVA?

https://llava-vl.github.io/

Motivation

LLaVA is an end-to-end trained large multimodal model that combines a vision encoder with Vicuna for general-purpose visual and language understanding. It achieves impressive chat capabilities reminiscent of the multimodal GPT-4 and sets a new state-of-the-art accuracy on Science QA.
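
As a starting point, something like the following might work for getting screenshot descriptions out of LLaVA via the `llava-hf` checkpoints in Hugging Face `transformers`. This is an untested sketch: the model id and USER/ASSISTANT prompt format come from the llava-hf model cards, and `screenshot.png` is a placeholder path.

```python
# Untested sketch: query LLaVA 1.5 about a screenshot via Hugging Face transformers.
# Assumes the llava-hf/llava-1.5-7b-hf checkpoint and its USER/ASSISTANT prompt format.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# <image> marks where the image embeddings are spliced into the prompt.
prompt = "USER: <image>\nDescribe the UI elements visible in this screenshot. ASSISTANT:"
image = Image.open("screenshot.png")  # placeholder path

inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, torch.float16
)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
```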

abrichr · Oct 04 '23 15:10

https://huggingface.co/papers/2311.05437

LLaVA-Plus is a general-purpose multimodal assistant that expands the capabilities of large multimodal models. It maintains a skill repository of pre-trained vision and vision-language models and can activate relevant tools based on users' inputs to fulfill real-world tasks. LLaVA-Plus is trained on multimodal instruction-following data to acquire the ability to use tools, covering visual understanding, generation, external knowledge retrieval, and compositions of these. Empirical results show that LLaVA-Plus outperforms LLaVA on existing capabilities and exhibits new ones. It is distinct in that the image query is directly grounded and actively engaged throughout the human-AI interaction session, significantly improving tool-use performance and enabling new scenarios.
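
If we wanted to prototype that skill-repository idea inside OpenAdapt before adopting LLaVA-Plus itself, the dispatch loop might look roughly like the sketch below. All names here are invented for illustration; this is not the LLaVA-Plus API.

```python
# Hypothetical sketch of LLaVA-Plus-style tool dispatch; all names are invented
# for illustration and are not the actual LLaVA-Plus API.
from typing import Callable, Dict

# "Skill repository": a registry mapping tool names to vision/vision-language models.
SKILLS: Dict[str, Callable[[str], str]] = {
    "ocr": lambda image_path: "recognized text ...",    # stand-in for an OCR model
    "detect": lambda image_path: "bounding boxes ...",  # stand-in for a detector
}

def plan_tool(user_request: str) -> str:
    """Stand-in for the multimodal LLM deciding which skill to activate."""
    return "ocr" if "text" in user_request.lower() else "detect"

def handle(user_request: str, image_path: str) -> str:
    tool = plan_tool(user_request)
    tool_output = SKILLS[tool](image_path)
    # In LLaVA-Plus the model composes the tool output into its final answer;
    # here we just tag and concatenate.
    return f"[{tool}] {tool_output}"

print(handle("What text is on the screen?", "screenshot.png"))
```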

abrichr · Nov 11 '23 20:11

https://huggingface.co/SkunkworksAI/BakLLaVA-1

BakLLaVA-1 is a Mistral 7B base augmented with the LLaVA 1.5 architecture. This first version showcases that a Mistral 7B base outperforms Llama 2 13B on several benchmarks. BakLLaVA-2 is in the works, with a significantly larger (commercially viable) dataset and a novel architecture that expands beyond the current LLaVA method; it will do away with the restrictions of BakLLaVA-1.
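
Since BakLLaVA-1 follows the LLaVA 1.5 architecture, the `transformers` sketch in the first comment should only need a different model id, assuming the converted llava-hf checkpoint is used:

```python
# Assumption: the converted llava-hf/bakLlava-v1-hf checkpoint keeps the
# LLaVA 1.5 prompt format used in the sketch above.
model_id = "llava-hf/bakLlava-v1-hf"
```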

abrichr · Nov 12 '23 22:11