rkilchmn

Results: 6 comments by rkilchmn

You can also set the environment variable `OPENAI_API_BASE`: `export OPENAI_API_BASE=http://someinternalhost.local/v1`
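For context, setting and checking the variable in a shell session looks like this (the host name is the placeholder from the comment, not a real endpoint):

```shell
# Example only: "someinternalhost.local" is a placeholder; substitute your
# own OpenAI-compatible endpoint (it must serve the /v1 API routes).
export OPENAI_API_BASE=http://someinternalhost.local/v1

# Clients that honor the variable will now send requests to this base URL.
echo "Using API base: $OPENAI_API_BASE"
```

Note that newer versions of the official `openai` Python client read `OPENAI_BASE_URL` instead, so check which variable your tool actually honors.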

It would be great if someone could post exactly how the `prompt_style` section of settings.yaml needs to look for llama3!
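I don't have a confirmed answer, but going by how the other prompt styles are configured, a guess at the shape might be the following (key names and the accepted value are assumptions — compare against the settings.yaml template shipped with your privateGPT version):

```yaml
# Assumed layout -- verify the section and key names against your version.
llm:
  prompt_style: "llama3"   # assumes your build accepts "llama3" here
```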

I have the exact same issue using Ubuntu 22.04 in WSL2 on Windows 11. I have a laptop with a Gen11 CPU and Gen12 GPU, and OpenVINO installed: `~/whisper.cpp$ ./build/bin/main -m models/ggml-base.en-encoder-openvino.bin -f...`

I found a solution here: https://github.com/ggerganov/whisper.cpp/pull/1694#issuecomment-1870984510 > You only need to provide the path of the standard model to the main. Ensure that both the standard model and the OPENVINO encoder...

I found this page: [https://trypear.ai/blog/wsl-setup](https://trypear.ai/blog/wsl-setup) (see also #123), but I also get an error when doing WSL connect: ![image](https://github.com/user-attachments/assets/e2348528-bccb-46ab-bf6b-af40bfa4c809)

In the code there is this additional linter instruction: `import oneccl_bindings_for_pytorch  # noqa: F401  # type: ignore`. Meaning: ignore F401, which is "module imported but unused". Why is it included...
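The usual reason: the package does its work at import time (registering a backend with PyTorch), so the name is never referenced afterwards and flake8 would flag it as F401 without the `noqa`. A runnable sketch of the pattern, using a fake module in place of `oneccl_bindings_for_pytorch` so it works without the real library:

```python
import pathlib
import sys
import tempfile

# Write a tiny stand-in module whose top-level code runs once, on first
# import -- mimicking a bindings package that registers a backend when
# imported. "fake_ccl_bindings" is invented for this demo.
pkg_dir = tempfile.mkdtemp()
pathlib.Path(pkg_dir, "fake_ccl_bindings.py").write_text(
    "BACKENDS = []\n"
    "BACKENDS.append('ccl')  # pretend this registers a distributed backend\n"
)
sys.path.insert(0, pkg_dir)

# The import is "unused" from the linter's point of view, hence noqa: F401.
# (In the real code, mypy may also lack stubs, hence the type: ignore.)
import fake_ccl_bindings  # noqa: F401

# The side effect happened even though the module's name is never used:
print(sys.modules["fake_ccl_bindings"].BACKENDS)  # ['ccl']
```

So the comment trio means: keep this import even though nothing references it, and silence both the flake8 warning (F401) and any mypy complaint about missing type stubs.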