autogen-ui
Ability to run locally
I'm able to run autogen using a config list like:

```python
config_list = [
    {
        "model": "mistral-7b-instruct-v0.1.Q5_0.gguf",  # or "mistral-instruct-7b"; the name of your running model
        "api_base": "http://0.0.0.0:5001/v1",  # the local address of the api
        "api_type": "open_ai",
        "api_key": "sk-111111111111111111111111111111111111111111111111",  # just a placeholder
    }
]
```
This talks to text-generation-webui, which has its OpenAI API emulation turned on. It would be nice to have a similar way of using a local LLM with autogen-ui.
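To connect the dots, a config list like the one above is typically handed to autogen agents through `llm_config` — a minimal sketch assuming the legacy pyautogen API; the model name, endpoint, and temperature are illustrative, and local servers generally ignore the key:

```python
# Hypothetical wiring: pass a local-endpoint config_list to autogen agents.
config_list = [
    {
        "model": "mistral-7b-instruct-v0.1.Q5_0.gguf",  # placeholder model name
        "api_base": "http://0.0.0.0:5001/v1",           # placeholder local endpoint
        "api_type": "open_ai",
        "api_key": "sk-placeholder",                    # ignored by most local servers
    }
]
llm_config = {"config_list": config_list, "temperature": 0}

# With pyautogen installed, agents would consume it roughly like:
#   assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
#   user = autogen.UserProxyAgent("user", human_input_mode="NEVER")
```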
Hey, are you trying to use Runpods here? I don't have any errors with this.
Maybe I'm just not doing it right, but I wasn't able to get it to work.
Recording of the autogen-ui below. Main steps:
- [optional] create a new conda environment
- `pip install autogenui` (or install from source: clone the GitHub repo and run `pip install -e .`)
- `autogenui --port 8081` (from the command line). This spins up the UI.
- open the UI in a browser at http://localhost:8081/ and chat with the default 2-agent workflow.
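The steps above condensed into commands — the env name and Python version are arbitrary, and the repo URL is inferred from the recording link:

```shell
# Optional: isolate dependencies in a fresh conda env (name is arbitrary)
conda create -n autogenui python=3.10 -y
conda activate autogenui

# Install from PyPI...
pip install autogenui

# ...or from source:
# git clone https://github.com/victordibia/autogen-ui.git
# cd autogen-ui && pip install -e .

# Launch the UI, then open http://localhost:8081/ in a browser
autogenui --port 8081
```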
https://github.com/victordibia/autogen-ui/assets/1547007/d560957c-7e13-47a7-a80d-da75695217bd
The OP's issue is about how to run autogen-ui with a local LLM service, not how to run autogen-ui locally. I don't see instructions for configuring a local LLM, or any LLM service besides ChatGPT.
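One thing that might be worth trying, assuming autogen-ui falls through to the OpenAI client's defaults (not verified against its code): the legacy `openai` Python client reads its base URL and key from environment variables, so setting them before launch could redirect requests to a local server. The address and port here are examples:

```shell
# Untested sketch: if autogen-ui doesn't override the OpenAI client's
# defaults, these env vars may point it at a local OpenAI-compatible
# endpoint such as text-generation-webui's API emulation.
export OPENAI_API_BASE="http://0.0.0.0:5001/v1"
export OPENAI_API_KEY="sk-placeholder"
autogenui --port 8081
```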