llama.cpp
Compile bug: error while compiling llama.cpp - libggml-cuda.so: undefined reference to `log2f@GLIBC_2.27'
Problem Description
Hi, is there any support for the OpenAI API capability provided by vLLM? I want to test some models with browser-use, such as the qwen-vl model. The only way I found is to run inference on VLM models with `vllm serve` and connect browser-use to it. Currently, after a few steps, I get an error like this: `Attempted to assign 1794 = 1794 multimodal tokens to 0 placeholders`, and vLLM crashes. Best regards.
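For context, a rough sketch of the workaround described above. The model name and port are illustrative assumptions, not taken from the original report:

```shell
# Sketch only: assumes a Qwen VL model is available locally;
# model name and port are examples, not from the report.

# 1. Expose the model through vLLM's OpenAI-compatible server:
vllm serve Qwen/Qwen2-VL-7B-Instruct --port 8000

# 2. Point browser-use at the resulting OpenAI-compatible endpoint
#    (http://localhost:8000/v1) from its LLM configuration.
```

It is at step 2, when browser-use sends multimodal requests through this endpoint, that the "multimodal tokens to 0 placeholders" crash reportedly occurs.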
Proposed Solution
Add browser-use support for the OpenAI-compatible API capability provided by vLLM.
Alternative Solutions
No response
Additional Context
No response