Anton Solbjørg

> Figure out how to get OI to respond to user input requests like Python's input(). Do we somehow detect a delay in the output..? Is there some universal flag...
Don't add the `-l` flag; it overwrites `api_base`. Once all the necessary parameters in the config file are set, just run `interpreter` and it will load the flags from the config file.
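A minimal sketch of what that config could contain (the key names follow LiteLLM conventions and may differ between versions, so treat this as an assumption, not the definitive format):

```yaml
# Hypothetical config.yaml for pointing Open Interpreter at a local
# OpenAI-compatible server. Key names assumed from LiteLLM conventions.
model: openai/local                  # "openai/" prefix = OpenAI-compatible provider
api_base: http://127.0.0.1:5000/v1   # Oobabooga's default API endpoint (assumed)
api_key: dummy                       # local servers usually ignore this, but a value is expected
```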
You need to start Oobabooga with the OpenAI-compatible API enabled, see the link for more info: https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API
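Roughly like this (flags per the linked wiki; older builds used `--extensions openai` instead, so check your version):

```
# Start text-generation-webui with its OpenAI-compatible API enabled.
python server.py --api
# The API is then served at http://127.0.0.1:5000/v1 by default.
```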
I see, I shall work on a PR to fix this... We used to run Oobabooga with Open Interpreter using `ooba` before, but it was a buggy mess.
Can you try adding `--model openai/local`?
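Putting it together, the full invocation could look something like this (port and endpoint assume Oobabooga's defaults, so adjust to your setup):

```
# Sketch: point interpreter at the local OpenAI-compatible API.
interpreter --api_base http://127.0.0.1:5000/v1 --model openai/local --api_key dummy
```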
Hey, could you try this PR? Assuming you know your way around git... https://github.com/KillianLucas/open-interpreter/pull/955

After installing, run interpreter with these flags:
```
-m Oobabooga/modelname  # Don't know if you need...
```
Install from the PR:
```
pip uninstall open-interpreter
pip install git+https://github.com/Notnaton/open-interpreter.git@litellm-custom-provider
```
I have updated it; Oobabooga should work now, I hope.
Tested on Windows; updated it to match upstream/main.
When I run llama.cpp directly I can get 100 tok/sec on a 7B model. Interpreter seems to have a max speed of 20 tok/sec, as there is little difference between a 7B model and a bigger one....
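For comparison, llama.cpp's own binary prints eval throughput at the end of a run, which gives the raw tokens-per-second number without interpreter in the loop (model path and prompt here are placeholders):

```
# Generate 256 tokens and read the "eval time ... tokens per second"
# line that llama.cpp prints when it finishes.
./main -m models/7b-q4.gguf -p "Benchmark prompt" -n 256
```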