miqaP

3 comments of miqaP

Hi, I had the same error when trying to run a model using the llama-cpp loader. It is not really clear from the documentation, but to run a model using llama-cpp...

Here is the simplest code I came up with that runs on my machine. It should help reproduce the error: ```python from outlines import models, generate, outlines from...

Don't know if it helps, but when you load a bitsandbytes Gemma model, vLLM falls back to the V0 engine, which seems to support Gemma 3; however, the behavior is the same,...