vLLM and OpenAI API capability support
Problem Description
Hi, is there any support for the OpenAI-compatible API capability provided by vLLM? I want to test some models with browser-use, such as the Qwen-VL model. The only way I found is to run inference with VLM models via `vllm serve` and connect browser-use to it. Currently, after a few steps I get an error like `Attempted to assign 1794 = 1794 multimodal tokens to 0 placeholders` and vLLM crashes. Best regards.
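For context, with vision enabled browser-use sends multimodal chat-completion requests shaped roughly like the sketch below (the endpoint, model name, and image data here are placeholders, not taken from the report above). The vLLM error quoted typically means an image was attached to the request but the served model's chat template produced no image placeholder tokens to receive it:

```python
# Hypothetical illustration of the multimodal request shape involved.
# If the served model (or its chat template) does not support images,
# vLLM can fail with "Attempted to assign ... multimodal tokens to 0
# placeholders" when such a request arrives.
from openai import OpenAI

client = OpenAI(base_url='http://localhost:8000/v1', api_key='EMPTY')  # placeholder endpoint

resp = client.chat.completions.create(
    model='Qwen/Qwen2-VL-7B-Instruct',  # placeholder VLM name
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'Describe this screenshot.'},
            {'type': 'image_url', 'image_url': {'url': 'data:image/png;base64,...'}},
        ],
    }],
)
print(resp.choices[0].message.content)
```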
Proposed Solution
Add browser-use support for the OpenAI-compatible API capability provided by vLLM.
Alternative Solutions
No response
Additional Context
No response
I am also waiting for it to support the vLLM API endpoint.
mark here +1
+1
+1
+1
@devops724 You can set up ChatOpenAI and configure it with the relevant vLLM connection details. It works in actual tests:
```python
import asyncio
import os
import sys

sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from dotenv import load_dotenv

load_dotenv()

from langchain_openai import ChatOpenAI

from browser_use import Agent

# Initialize the model
llm = ChatOpenAI(
    # Added: point at the vLLM OpenAI-compatible server
    base_url='http://192.168.114.114:18080/v1',
    # Changed: the name of the model served by vLLM
    model='Qwen3-4B',
    temperature=0.0,
)

task = 'Go to kayak.com and find the cheapest one-way flight from Zurich to San Francisco in 3 weeks.'

# Set use_vision to True or False depending on whether the model is multimodal
agent = Agent(task=task, llm=llm, use_vision=False)


async def main():
    await agent.run()


if __name__ == '__main__':
    asyncio.run(main())
```
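As a quick way to rule out connection problems before involving browser-use, here is a minimal sanity-check sketch, assuming the same hypothetical base_url and model name as above, that hits the vLLM OpenAI-compatible endpoint directly with the plain openai client:

```python
# Minimal sanity check of a vLLM OpenAI-compatible endpoint.
# base_url and model name reuse the hypothetical values from the
# snippet above; vLLM ignores the API key, so any string works.
from openai import OpenAI

client = OpenAI(base_url='http://192.168.114.114:18080/v1', api_key='EMPTY')

resp = client.chat.completions.create(
    model='Qwen3-4B',
    messages=[{'role': 'user', 'content': 'Reply with the word: ready'}],
)
print(resp.choices[0].message.content)
```

If this round-trip succeeds but the Agent run still crashes with the multimodal-placeholder error, the problem is likely the vision path rather than the endpoint, so keep use_vision=False for text-only models.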