Feature Request: Support Open Source LLM Models & Oobabooga Webui
Currently, aider only supports OpenAI models via the OpenAI API. Oobabooga's webui has an API extension that accepts requests and lets open source models generate the content instead, completely locally. Would it be possible to add support for the webui API in the future?
If you have a way to run local models with an OpenAI compatible API, aider should be able to connect to them.
See recent issue #17 and this comment https://github.com/paul-gauthier/aider/issues/20#issuecomment-1606219101
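For context, "OpenAI compatible" just means the server answers the same HTTP routes and JSON shapes as OpenAI's API. A minimal sketch of the contract aider relies on (the base URL and model name below are placeholders, not anything aider defines):

```python
# Minimal OpenAI-compatible chat completion request; any server that answers
# this shape should work. The URL and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:5001/v1/chat/completions",  # your local server's base URL
    headers={"Authorization": "Bearer dummy"},    # key is unused by local servers
    json={
        "model": "local-model",  # placeholder; many local servers ignore this
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```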
Is there a complete guide on how to actually do this? I am confused about how to point aider at the API, even though I have used that API many times with other similar tools; for Oobabooga's webui it is http://localhost:5000/api by default. I combed the repos to try to figure it out, and I added the command line argument --openai-api-base with the URL of the local API, but it still wants an OpenAI API key. I have never used OpenAI's API and do not have a key for it.
Edit: I got aider to talk to the API, but it looks for http://localhost:5000/api/v1/models, whereas Oobabooga's API exposes the model at http://localhost:5000/api/v1/model (without the "s"). I am not sure why the "s" is missing, or whether this is an issue with Oobabooga's webui or with aider.
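For anyone hitting the same wall: a quick probe shows which of the two APIs is actually answering. A sketch, using the ports that come up in this thread (5000 for textgen's native API, 5001 for its OpenAI-compatible one):

```python
# Probe both routes to see which API is listening; the ports are the textgen
# defaults mentioned in this thread.
import requests

for url in (
    "http://localhost:5000/api/v1/model",  # textgen's native route (singular)
    "http://localhost:5001/v1/models",     # OpenAI-compatible route (plural)
):
    try:
        r = requests.get(url, timeout=5)
        print(url, "->", r.status_code)
    except requests.ConnectionError:
        print(url, "-> not listening")
```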
I finally figured this out! It turns out there is an extension in the Oobabooga webui called "openai", and that is the thing that needed enabling.
For future people trying to get this to work, try the following steps:
- Open oobabooga-text-generation-webui.
- Go to the "Session" tab ("Interface" on older versions).
- Check the box under Extensions labeled "openai". This enables the OpenAI-compatible API; you do not need to disable the other API extension.
- Apply and restart the UI.
- Call aider with:

```
aider --openai-api-base http://localhost:5001/v1 --openai-api-key dummy
```
If you followed all those steps, you should be able to use the webui with aider easily.
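Those two flags do roughly the same thing as configuring the (pre-1.0) openai Python package by hand, which also makes for a quick smoke test before launching aider. A sketch:

```python
# Rough equivalent of the aider flags above, using the pre-1.0 openai package;
# useful as a smoke test before starting aider itself.
import openai

openai.api_base = "http://localhost:5001/v1"  # same value as --openai-api-base
openai.api_key = "dummy"                      # any non-empty string will do

reply = openai.ChatCompletion.create(
    # The model name is likely ignored: the extension serves whichever model
    # is loaded in the webui (an assumption about textgen's behavior).
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
)
print(reply["choices"][0]["message"]["content"])
```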
That's great that you got it working. And thank you for sharing the steps. That should be very helpful to others!
@Sopitive What models did you try and did it generate the code expected?
I have only tried a handful of models, mainly Wizard Vicuna 13B and Llama 30B. I have yet to try StarCoder or CodeGen. The models I tried did not generate the code I expected; the default request to generate a snake game yielded very poor results. I am hoping the code-specific models will perform better once I finally get them installed. There are obviously larger models, 65B parameters and up, that would probably manage it, but even on a 4090 I would have to wait a while for a complete result. If you try it, use one of the 8k-context llama models and load it with ExLlama.
@Sopitive Unfortunately I do not have a GPU to try ExLlama, but it would be interesting to see whether WizardCoder can generate decent code we could use.
A few days ago, after a lot of work, I got to test aider with WizardCoder by using this pull request as a branch: https://github.com/oobabooga/text-generation-webui/pull/2892 . I ran it on CPU only, using one of the famous "TheBloke" quantized versions, with the "openai" extension of textgen enabled. From what I have read, WizardCoder is almost on par with GPT-3.5 and has a context size of 8k, so it is very promising...
I also tried using LocalAI with Docker... but LocalAI seems to have a bug with this particular model: I think the problem is that the API it serves was not returning the remaining context length in the JSON response, and aider fails.
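In case it helps anyone chasing a similar failure: an OpenAI-style response normally carries a `usage` block with token counts, and a client that assumes it is present will break on a backend that omits it. A hedged sketch of reading it defensively (the function and the window size are illustrative, not aider's actual code):

```python
# Read token usage defensively; "usage" is part of the OpenAI response schema,
# but some compatible servers omit it. Names here are illustrative only.
def remaining_context(response: dict, context_window: int = 8192) -> int:
    usage = response.get("usage") or {}
    used = usage.get("total_tokens", 0)  # treat a missing usage block as zero
    return context_window - used
```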
What I found is that the model refuses to put the filename before the "```" fence, despite the prompt explicitly demanding it. So the conclusion is that each of the LLMs may need its own custom prompts to work properly. What I suggest is reorganizing aider's code so that it keeps a separate set of prompts per LLM, making it easy to switch to or adapt new models.
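One possible shape for that reorganization, purely as a sketch; none of these names come from aider's actual codebase:

```python
# Hypothetical per-model prompt registry; it illustrates the suggestion,
# nothing here is taken from aider's real internals.
from dataclasses import dataclass

@dataclass
class ModelPrompts:
    system: str       # system prompt wording tuned for this model
    fence_hint: str   # how the model is told to label edited files

PROMPTS = {
    "gpt-3.5-turbo": ModelPrompts(
        system="You are an expert programmer...",
        fence_hint="Put the filename alone on the line before the opening fence.",
    ),
    "wizardcoder": ModelPrompts(
        system="Below is an instruction that describes a task...",
        fence_hint="Repeat the full file path immediately before each code block.",
    ),
}

def prompts_for(model_name: str) -> ModelPrompts:
    # Unknown models fall back to the default wording.
    return PROMPTS.get(model_name.lower(), PROMPTS["gpt-3.5-turbo"])
```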
Have a look at my reply to this post https://github.com/paul-gauthier/aider/issues/138
FYI, we just added an #llm-integrations channel on the discord, as a place to discuss using aider with alternative or local LLMs.
https://discord.gg/X9Sq56tsaR