
Ability to run locally

Open iplayfast opened this issue 8 months ago • 4 comments

I'm able to run autogen using a config list like:

```python
config_list = [
    {
        "model": "mistral-7b-instruct-v0.1.Q5_0.gguf",  # or "mistral-instruct-7b"; the name of your running model
        "api_base": "http://0.0.0.0:5001/v1",  # the local address of the API
        "api_type": "open_ai",
        "api_key": "sk-111111111111111111111111111111111111111111111111",  # just a placeholder
    }
]
```

This talks to text-generation-webui, which has its OpenAI API emulation turned on. It would be nice to have a similar way of using a local LLM with autogen-ui.
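The config above can be wrapped in a small helper so the local endpoint details live in one place. This is only a sketch of the idea; `local_llm_config` is a hypothetical helper name, and the field names follow the autogen-style config shown earlier, not anything from the autogen-ui docs:

```python
def local_llm_config(model, api_base, api_key="sk-" + "1" * 48):
    """Build one autogen-style config entry for a local
    OpenAI-compatible endpoint (e.g. text-generation-webui
    with its OpenAI API emulation enabled)."""
    return {
        "model": model,        # the name of the model served locally
        "api_base": api_base,  # local address of the OpenAI-compatible API
        "api_type": "open_ai",
        "api_key": api_key,    # placeholder; local servers typically ignore it
    }

# Same values as the snippet above
config_list = [
    local_llm_config(
        "mistral-7b-instruct-v0.1.Q5_0.gguf",
        "http://0.0.0.0:5001/v1",
    )
]
```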

iplayfast avatar Oct 19 '23 07:10 iplayfast

Hey, are you trying to use Runpods here? I don't have any errors with this.

hieuminh65 avatar Oct 20 '23 17:10 hieuminh65

Maybe I'm just not doing it right, but I wasn't able to get it to work.

iplayfast avatar Oct 20 '23 22:10 iplayfast

Here is a recording of autogenui. Main steps:

  • [optional] create a new conda environment
  • pip install autogenui (or install from source on GitHub: clone the repo, then pip install -e .)
  • autogenui --port 8081 (from the command line). This spins up the UI.
  • open the UI in a browser at http://localhost:8081/ and chat with the default 2-agent workflow.

https://github.com/victordibia/autogen-ui/assets/1547007/d560957c-7e13-47a7-a80d-da75695217bd

victordibia avatar Nov 05 '23 02:11 victordibia

> Recording of the autogenui. Main steps

The OP's issue is about how to run autogen-ui with a local LLM service, not how to run autogen-ui locally. I don't see instructions for configuring a local LLM, or any LLM service besides ChatGPT.

dlaliberte avatar Dec 17 '23 05:12 dlaliberte