
[New Model]: allenai/Molmo-7B-O-0924 VisionLM

Open · K-Mistele opened this issue on Sep 25, 2024 · 16 comments

The model to consider.

https://huggingface.co/allenai/Molmo-7B-O-0924 https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19

The closest model vllm already supports.

Existing OLMo models by Allen AI: OLMoForCausalLM and OLMoEForCausalLM are supported.

What's your difficulty of supporting the model you want?

Molmo is a vision-language model, so unlike the previous text-only OLMo models from Allen AI, it needs image input support in addition to the language backbone.

Before submitting a new issue...

  • [X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

K-Mistele · Sep 25, 2024

+1

Of note, Molmo-72B-0924 appears to benchmark as the SOTA open-source model, beating many closed models, and it performs much better than the Llama 3.2 models. It would be great to have this model supported.

More information here:

https://huggingface.co/allenai/Molmo-72B-0924 https://molmo.allenai.org/blog https://molmo.allenai.org/paper.pdf

davidsyoung · Sep 26, 2024

+1. If it's not already done by this weekend, I can try to handle it then.

galatolofederico · Sep 26, 2024

How can I get an ONNX version of the model?

sharanks8 · Sep 26, 2024

+1. If it's not already done by this weekend, I can try to handle it then.

How can we do it ourselves?

Gokul10272001 · Sep 27, 2024

We're a bit overwhelmed by things to work on, so any help/contribution is definitely welcome! Supporting this model should be straightforward since it's LLaVA-style, like many other VLMs we support today.

If anyone decides to make a PR to support this model, please ping me directly for review once it's ready!

ywang96 · Sep 27, 2024
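For anyone wanting to try the model once support lands, the snippet below is a minimal sketch of offline inference with vLLM's public LLM API, following how other LLaVA-style VLMs are run today. The prompt format, the image path, and the need for trust_remote_code are assumptions; check the model card and vLLM's multimodal examples once the implementation is merged.

```python
from PIL import Image

from vllm import LLM, SamplingParams

# Model id taken from the HF links above. Molmo ships custom code, so
# trust_remote_code is assumed to be required (as with several other VLMs).
llm = LLM(model="allenai/Molmo-7B-O-0924", trust_remote_code=True)

# Placeholder prompt format -- the real Molmo prompt/chat template may differ.
prompt = "USER: <image>\nDescribe this image. ASSISTANT:"
image = Image.open("example.jpg")  # placeholder local image

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.2, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```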

Hello, I'm with the Molmo team at Ai2. We'll soon be adding our models to vllm, so stay tuned!

mrsalehi · Sep 28, 2024

Hello, I'm with the Molmo team at Ai2. We'll soon be adding our models to vllm, so stay tuned!

Nice, will that include molmoe?

stellanhaglund · Sep 29, 2024

Hello, I'm with the Molmo team at Ai2. We'll soon be adding our models to vllm, so stay tuned!

Nice, will that include molmoe?

yes

mrsalehi · Sep 29, 2024

@mrsalehi Thank you! Do you know when approximately?

SinanAkkoyun · Sep 29, 2024

And when will you release the dataset?

SinanAkkoyun · Sep 29, 2024

@mrsalehi Thank you! Do you know when approximately?

Most likely today or tomorrow.

mrsalehi · Sep 30, 2024

Is the support included in release 0.6.2?

premg16 · Oct 1, 2024

Is the support included in release 0.6.2?

@premg16 0.6.2 has already been released, so no, but we will make a new release when this model is supported by vLLM!

ywang96 · Oct 1, 2024

Very excited for the Molmo integration! Let us know if there's anything we can do to help.

jbohnslav · Oct 1, 2024

https://github.com/vllm-project/vllm/pull/9016

mrsalehi · Oct 2, 2024
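Once a vLLM build containing PR #9016 is installed, serving should go through the usual OpenAI-compatible server. Below is a hedged sketch of querying it with the standard openai client; the model id, port, and image URL are placeholders, and the image_url message format is the one vLLM accepts for other VLMs, so confirm the details against the docs once Molmo support ships.

```python
# Assumes a vLLM build containing PR #9016 is already serving the model, e.g.:
#   vllm serve allenai/Molmo-7B-O-0924 --trust-remote-code
# Model id, port, and flags are assumptions based on how other VLMs are served.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="allenai/Molmo-7B-O-0924",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            # Placeholder image URL; any reachable image works.
            {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
        ],
    }],
    max_tokens=128,
)
print(response.choices[0].message.content)
```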

@mrsalehi Thank you for the vLLM implementation :)

When will you release the datasets?

SinanAkkoyun · Oct 3, 2024

@mrsalehi will there be a release for molmoe?

stellanhaglund · Nov 11, 2024