Domino9752

Results: 4 comments of Domino9752

After getting this to work, it is clear to me that this is an issue with Litellm, not open-interpreter. Basically, oobabooga modified their API a couple of weeks ago and...
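For context, a minimal sketch (not from the original comment) of the kind of workaround involved: routing the request through LiteLLM's generic OpenAI-compatible path, since text-generation-webui now exposes an OpenAI-style API. The local URL, placeholder api_key, and model name below are assumptions, not the original setup.

import litellm

# Assumed: text-generation-webui started with its API enabled on the default
# local port, exposing an OpenAI-compatible /v1 endpoint.
response = litellm.completion(
    model="openai/local-model",            # hypothetical model name
    api_base="http://127.0.0.1:5000/v1",   # assumed local endpoint
    api_key="sk-dummy",                    # local server ignores the key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)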

Say Hey- I added your code to my venv, and when running your example, I received this error: Traceback (most recent call last): File "d:\Foundary\Gemma-3_llama.py", line 57, in output =...

kossum- Thanks for your help. I changed over to your branch: python -m pip install git+https://github.com/kossum/llama-cpp-python@gemma3-fix --no-cache-dir --force-reinstall --upgrade --config-settings="cmake.args=-DGGML_CUDA=on" ...and now it runs. I previously had the syntax shown...
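For anyone else hitting this, a minimal sketch of running a Gemma 3 GGUF against that CUDA build of llama-cpp-python; the model path, context size, and prompt are placeholders, not the script from the original comment.

from llama_cpp import Llama

# Load a Gemma 3 GGUF with all layers offloaded to the GPU
# (requires the CUDA build installed with the command above).
llm = Llama(
    model_path="models/gemma-3-4b-it-Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,   # offload every layer to the GPU
    n_ctx=4096,        # context window, adjust to taste
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about CUDA."}]
)
print(output["choices"][0]["message"]["content"])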

Reference: https://docs.continue.dev/customize/model-providers/more/textgenwebui. This code is all pretty dynamic, as my continue code updated again just a few days ago. I've just started to use continue; this is (mostly) what Gemini-flash...
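As a quick sanity check before pointing continue at the backend, something like this confirms the text-gen-webui OpenAI-compatible endpoint is actually up; the port assumes the server was started with its API enabled on the default 5000.

import requests

# List the models the OpenAI-compatible API reports; continue's model entry
# would point at this same /v1 base URL (assumed local setup).
resp = requests.get("http://127.0.0.1:5000/v1/models", timeout=10)
resp.raise_for_status()
for m in resp.json().get("data", []):
    print(m.get("id"))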