187 comments of Tom Dörr

Doesn't happen when I switch the model to `astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit`:

```
python3 -m vllm.entrypoints.openai.api_server --model astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit --quantization gptq --tensor-parallel-size 1 --port 38242 --gpu-memory-utilization 0.8 --dtype float16
```
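For reference, a minimal sketch of how one could query that server once it's up (assumptions: the `openai` v1 Python client is installed, the server is on localhost with the port from the command above, and no `--api-key` was set, in which case vLLM accepts any placeholder key):

```
from openai import OpenAI

# Point the client at the local vLLM OpenAI-compatible endpoint.
client = OpenAI(
    base_url="http://localhost:38242/v1",
    api_key="EMPTY",  # placeholder; vLLM ignores it unless --api-key is set
)

completion = client.chat.completions.create(
    model="astronomer/Llama-3-8B-Instruct-GPTQ-8-Bit",
    messages=[{"role": "user", "content": "Say hello."}],  # illustrative prompt
)
print(completion.choices[0].message.content)
```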

Now I'm getting a `BadRequestError` again. Maybe the vLLM server just blocked me because I was sending that many bad requests earlier.

```
Creating basic bootstrap: 1/9 2%|▊ |...
```

Getting the same "Not found" error again, but only after the MIPRO bootstrapping phase.
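One way to narrow this down (my assumption being that the "Not found" comes from the client requesting a model name the server doesn't actually serve): list the model IDs the vLLM server registered and compare them with what the client sends after bootstrapping. A quick sketch, assuming the server above and the `requests` package:

```
import requests

# Ask the OpenAI-compatible server which models it serves.
resp = requests.get("http://localhost:38242/v1/models")
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # must match the `model` field the client sends
```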

Similar issue: https://github.com/stanfordnlp/dspy/issues/1011

You could use my Docker setup to avoid installation issues: https://github.com/tom-doerr/TecoGAN

Could you send the server log? I think I got that error when the model hadn't finished loading.
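If the client fires requests before the model finishes loading, you can see exactly this kind of error. A sketch of a readiness check (assuming vLLM's OpenAI-compatible server, whose `/health` endpoint returns 200 once startup completes; `wait_for_server` is a hypothetical helper name):

```
import time
import requests

def wait_for_server(base_url="http://localhost:38242", timeout=300.0):
    # Poll /health until the server reports ready or we run out of time.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if requests.get(f"{base_url}/health", timeout=5).status_code == 200:
                return  # server is up and the model is loaded
        except requests.ConnectionError:
            pass  # server not accepting connections yet
        time.sleep(2)
    raise TimeoutError("vLLM server did not become healthy in time")

wait_for_server()
```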

Do you mean running the code in this repo failed?

@lopugit It worked for you, right?

Thank you, that should improve usefulness a lot.