Support for Oobabooga text-generation-webui
Validations
- [ ] I believe this is a way to improve. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that requests the same enhancement
Problem
I can't seem to find how to configure Continue to use oobabooga with the OpenAI API plugin.
Solution
It could probably be added as a further local provider using the OpenAI API with a different base URL. Adding ChatGPT and then changing the base URL might also work, but I don't see any option for changing the OpenAI base URL in Continue.
Hi @allo- , I'm not familiar with oobabooga but it sounds like you might be able to use our existing config for OpenAI compatible servers: https://docs.continue.dev/customize/model-providers/openai#openai-compatible-servers--apis
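For reference, a minimal sketch of what such an OpenAI-compatible entry might look like in Continue's `config.yaml` (the model name, port, and `apiKey` value here are assumptions, not verified values; text-generation-webui's OpenAI-compatible extension listens on port 5000 by default):

```yaml
# Sketch only: adjust model name and port to your setup.
models:
  - name: Oobabooga (local)
    provider: openai
    model: local-model # whatever name the server reports under /v1/models
    apiBase: http://127.0.0.1:5000/v1
    apiKey: dummy # TGW doesn't validate the key, but the field may be expected
    roles:
      - chat
```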
This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.
I can't test right now, but if it can be configured with a custom endpoint for the OpenAI API, it will probably work.
Any update on this? How would I add oobabooga as a custom endpoint?
I've been messing with this for hours and cannot figure out how to get it to work with oobabooga. I changed apiBase to http://127.0.0.1:5000/v1 as per ooba's instructions and get nothing but errors.
If anyone has a working config.yaml for oobabooga please share.
Reference: https://docs.continue.dev/customize/model-providers/more/textgenwebui
This code is all pretty dynamic, as my Continue code updated again just a few days ago. I've just started using Continue, and this is (mostly) what Gemini Flash told me to try. I think the key you are looking for is the line `provider: text-gen-webui`; using `openai` can work, but it wasn't reliable for me. This YAML is working for me for Continue chat, but I don't have the syntax for `enable_thinking: true` working yet. So, I too need help with this.
```yaml
name: TextGenWebui
version: 1.0.0
schema: v1
models:
  - name: GLM-4.5-Air-Python-Coder
    provider: text-gen-webui
    model: GLM-4.5-Air-Q4_K_M.gguf # actual model name
    apiBase: http://localhost:5000/v1
    env:
      useLegacyCompletionsEndpoint: false # false for most cases
    roles:
      - chat
      - edit
      - apply
    defaultCompletionOptions:
      temperature: 0.3
      contextLength: 131072
      maxTokens: 4096
      topP: 0.3
      topK: 40
      stop: ["<|fim_middle|>", "<|file_separator|>"] # Qwen autocomplete
    requestOptions:
      headers: { "Content-Type": "application/json" }
      extraBodyProperties:
        chat_template_kwargs: # needed for TGW?
          enable_thinking: false # doesn't register :(
    chatOptions:
      baseSystemMessage: |
        You are an expert Python developer with deep knowledge of performance optimization and best practices. When asked for code, provide well-documented, idiomatic, and efficient Python solutions.
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase
```
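If Continue still errors with a config like this, it can help to take Continue out of the loop and hit the server's OpenAI-compatible endpoint directly. A minimal Python sketch (host, port, and model name are assumptions matching the config above; nothing is sent until you call `chat()` with the server running):

```python
import json
import urllib.request

# Assumption from this thread: TGW's OpenAI-compatible API on port 5000.
API_BASE = "http://127.0.0.1:5000/v1"


def build_request(prompt, model="GLM-4.5-Air-Q4_K_M.gguf"):
    """Build the chat-completions request an OpenAI-compatible server expects."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )


def chat(prompt):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If a plain request like this fails too, the problem is on the server side (API extension not enabled, wrong port) rather than in Continue's config.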