
Add LLaVA support

Open LinkerCodeMonkey opened this issue 1 year ago • 2 comments

We added code to support LLaVA in #307.

Test code:

from vllm import MLLM, SamplingParams
prompts = [
    "What is the man doing?",
    "What is your name?",
    "What can I do for you?",
    "What is the man doing?",
]
images = [{
    "src_type": "url",
    "image_src": "IMAGE_URL"}]*4

sampling_params = SamplingParams(temperature=0.8, top_p=0.5, max_tokens=1024)
model,tokenizer = "/PATH/LLaVA-13b-delta-v1-1", "/PATH/LLaVA-13b-delta-v1-1"
gpu_memory_utilization = 0.9
mllm = MLLM(model=model,tokenizer=tokenizer, gpu_memory_utilization=gpu_memory_utilization)
outputs = mllm.generate(prompts, images, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
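One subtlety in the example above: `[{...}] * 4` repeats a reference to a single dict rather than creating four independent copies. That is harmless for read-only use like this test, but mutating one entry mutates all of them. A minimal pure-Python sketch of the difference (no vLLM required; the `"OTHER_URL"` value is just a placeholder for illustration):

```python
# Build the image list the same way as the test code above:
# four references to ONE dict object, not four copies.
images = [{
    "src_type": "url",
    "image_src": "IMAGE_URL"}] * 4

assert all(entry is images[0] for entry in images)  # same object repeated

# Mutating one entry therefore changes every entry:
images[0]["image_src"] = "OTHER_URL"  # placeholder value
assert images[3]["image_src"] == "OTHER_URL"

# If each prompt needs its own image, build independent dicts instead:
images = [{"src_type": "url", "image_src": url}
          for url in ["IMAGE_URL"] * 4]
images[0]["image_src"] = "OTHER_URL"
assert images[3]["image_src"] == "IMAGE_URL"  # others are unaffected
```

A list comprehension (or `copy.deepcopy` per element) is the usual fix when the entries must be independent.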

LinkerCodeMonkey avatar Aug 17 '23 02:08 LinkerCodeMonkey

Thanks. Does this work with LLaVA 1.5?

teraktor2006 avatar Oct 08 '23 22:10 teraktor2006

@LinkerCodeMonkey do you still plan to work on this PR?

hmellor avatar Mar 28 '24 14:03 hmellor

Closed, as we added support for LLaVA in #3042.

WoosukKwon avatar Apr 12 '24 07:04 WoosukKwon