Add LM Studio Support
I really like this project, but I prefer to run my local models through LM Studio rather than Ollama, primarily because of its simple-to-use GUI that tells me exactly what models can run and what quantization I need to run them on my GPU. If you guys have time, please add LM Studio support to this project. Thanks in advance.
I haven't used it; I'll take a look. Community contributions are also very welcome if anyone feels inspired to dive into the source or ask Devin to do it 😜
https://github.com/wandb/openui/issues/91#issuecomment-2099240831
In the file server.py, in the folder backend/openui, at line 68, set the base_url to whatever you wish:
openai = AsyncOpenAI(
    base_url="https://YOUR-URL/v1"
)  # e.g. AsyncOpenAI(base_url="http://127.0.0.1:11434/v1") for a local Ollama server
ollama = AsyncClient()
router = APIRouter()
session_store = DBSessionStore()
github_sso = GithubSSO(
    config.GITHUB_CLIENT_ID, config.GITHUB_CLIENT_SECRET, f"{config.HOST}/v1/callback"
)
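If you'd rather not hardcode the URL, one variation (just a sketch, not what's in the repo; the OPENAI_BASE_URL variable name here is my own choice, not something openui reads) is to take the endpoint from the environment and fall back to the official API:

import os

from openai import AsyncOpenAI

# Point the client at whatever OPENAI_BASE_URL says (an LM Studio or
# Ollama server, for example); default to the official OpenAI endpoint.
openai = AsyncOpenAI(
    base_url=os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
)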
Did not work for me. Could you please explain further? I set it like this: [screenshot] But got: [error screenshot]
Update
I managed to make it work with LM Studio; follow the instructions below (there is a quick connectivity check after the list):

- Open server.py in the backend/openui folder of your openui directory.
- Modify line 68; just copy and paste this:

  openai = AsyncOpenAI(base_url="http://localhost:1234/v1")

- Now type the following. The OPENAI_API_KEY value only needs to be non-empty, since LM Studio's local server ignores it:

  (openui) H:\openui\backend>set OPENAI_API_KEY=http://localhost:1234/v1
  (openui) H:\openui\backend>echo %OPENAI_API_KEY%
  http://localhost:1234/v1

- Start your local server in LM Studio (chat python) with a model loaded.
- Run:

  (openui) H:\openui\backend>python -m openui

- ENJOY
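To sanity-check the endpoint before launching openui, here is a minimal sketch (assuming LM Studio's server is running on the default port 1234 with a model loaded; the model name and API key below are placeholders, since LM Studio ignores both):

from openai import OpenAI

# LM Studio exposes an OpenAI-compatible API on http://localhost:1234/v1.
# The API key just has to be a non-empty string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# List whatever models the server reports as available.
for model in client.models.list():
    print(model.id)

# Ask the loaded model for a one-line reply.
response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio answers with the loaded model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)

If both calls succeed, openui's requests to the same base_url should work as well.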