o1-preview support with Azure OpenAI
Before submitting your bug report
- [ ] I believe this is a bug. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that reports the same bug
- [ ] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: macOS
- Continue: preview release
- IDE: VS Code
- Model: o1-preview
Description
I've got it configured and connected. No errors.
However, nothing comes back as a response.
Related: https://github.com/continuedev/continue/issues/2250
I'll add more detail tomorrow, but I wanted to get this posted in case it's a known issue that just isn't tracked yet. Also, since this is Azure OpenAI and specific to o1-preview, I'm not sure I can give steps to reproduce without some more specific questions.
To reproduce
No response
Log output
No response
Hi @sheldonhull, please share the additional details when you're able to! No obvious ideas on my end as to what could be causing issues.
OK, first error I found:
"msg": {
"messageId": "7027004f-5a5b-46d9-a066-f7270b4ef76c",
"messageType": "llm/streamChat",
"data": {
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "what model is this"
}
]
}
],
"title": "azure-openai-o1-preview",
"completionOptions": {}
}
}
}
Error: HTTP 400 model_error from https://{MYSUBDOMAIN}.openai.azure.com/openai/deployments/o1-preview/chat/completions?api-version=2024-09-01-preview
{
  "error": {
    "message": "Invalid type for 'max_completion_tokens': expected an integer, but got a decimal number instead.",
    "type": "invalid_request_error",
    "param": "max_completion_tokens",
    "code": "invalid_type"
  }
}
I've updated my config to include both of these, assuming y'all remapped them based on the other PR.
"completionOptions": {
"maxTokens": 32768
}
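For context, the request body the o1-preview deployment ultimately expects on the chat/completions endpoint should look roughly like the sketch below. The URL comes from the error above, and per the error message, max_completion_tokens must be sent as an integer; the value here just mirrors the maxTokens setting, and the exact mapping Continue performs is an assumption on my part.

POST https://{MYSUBDOMAIN}.openai.azure.com/openai/deployments/o1-preview/chat/completions?api-version=2024-09-01-preview
{
  "messages": [
    { "role": "user", "content": "what model is this" }
  ],
  "max_completion_tokens": 32768
}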
No results so far. When I get time, I'll try to post more details.
Hi, I encountered a similar bug when using either o1-preview or o1-mini from OpenRouter. The request is clearly sent, but no response is visible. It happened when I switched to the o1 models after a previous conversation (with another model); I had also used selected code in the previous conversation and attached one context item. Opening a new session and asking the o1 models directly works fine.
Windows 10 / v0.9.214 prerelease / VS Code / OpenRouter o1-mini and o1-preview
Try this prompt: "first letter of alphabet, then return. ignore any tool function calls."
I think the final issue is that it doesn't support tool function calls. I'm not sure if Continue is trying to add those behind the scenes, but when I told it to ignore tool function calls, it finally worked.
extension host error detail
[Extension Host] HTTP 400 Bad Request from https://{MYSUBDOMAIN}.openai.azure.com/openai/deployments/o1-preview/completions?api-version=2024-09-01-preview
{"error":{"code":"OperationNotSupported","message":"The completion operation does not work with the specified model, o1-preview. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993."}}
Code: undefined
Error number: undefined
Syscall: undefined
Type: undefined
Error: HTTP 400 Bad Request from https://{MYSUBDOMAIN}.openai.azure.com/openai/deployments/o1-preview/completions?api-version=2024-09-01-preview
{"error":{"code":"OperationNotSupported","message":"The completion operation does not work with the specified model, o1-preview. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993."}}
at customFetch (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:105973:21)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at withExponentialBackoff (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:105688:26)
at Azure._legacystreamComplete (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:485665:26)
at Azure._streamChat (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:485681:28)
at Azure._streamComplete (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:485654:26)
at Azure.streamComplete (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:106103:26)
at streamLines (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:87849:22)
at filterEnglishLinesAtStart (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:95761:20)
at filterCodeBlockLines (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:95735:20)
at filterEnglishLinesAtEnd (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:95781:20)
at stopAtLines (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:95703:20)
at fixCodeLlamaFirstLineIndentation (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:95803:20)
at streamWithNewLines (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:95667:20)
at Object.run (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:107223:26)
at runNodeJsSlashCommand (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:518796:28)
at i.value (/Users/{USERNAME}/.vscode/extensions/continue.continue-0.9.214-darwin-arm64/out/extension.js:525218:29)
log entry with details on request
error details from webview message
[Extension Host] Error handling webview message: {
  "msg": {
    "messageId": "cf3f3a39-7a7d-4f43-bb05-1973726c5903",
    "messageType": "command/run",
    "data": {
      "input": "/edit mymarkdowndoc.md (19-33)\n{MYCONTENT IN MARKDOWN}\n\n\ncan you improve the inline powershell?",
      "history": [
        {
          "role": "user",
          "content": "Take the file prefix and suffix into account, but only rewrite the code_to_edit.... more prior content rendered prompt"
        }
      ],
      "modelTitle": "azure-openai-o1-preview",
      "slashCommandName": "edit",
      "contextItems": [
        {
          "content": "some context",
          "name": "myfile.go (1-86)",
          "description": "{PACKAGENAME}/myfile.go (1-86)",
          "id": {
            "providerTitle": "code",
            "itemId": "e33b92c1-012d-4adb-b27a-582265ed59b7"
          },
          "uri": {
            "type": "file",
            "value": "/Users/{USERNAME}/git/github.com/{USERNAME}/{project}/{PACKAGENAME}/myfile.go"
          },
          "editing": true
        },
        {
          "content": "go content here",
          "name": "myfile.go (1-86)",
          "description": "{PACKAGENAME}/myfile.go (1-86)",
          "id": {
            "providerTitle": "code",
            "itemId": "e33b92c1-012d-4adb-b27a-582265ed59b7"
          },
          "uri": {
            "type": "file",
            "value": "/Users/{USERNAME}/git/github.com/{USERNAME}/{project}/{PACKAGENAME}/myfile.go"
          },
          "editing": true
        }
      ],
      "historyIndex": 0,
      "selectedCode": [
        {
          "filepath": "mymarkdowndoc.md (19-33)",
          "range": {
            "start": {
              "line": 18,
              "character": 0
            },
            "end": {
              "line": 32,
              "character": 0
            }
          }
        }
      ]
    }
  }
}
Error: HTTP 400 Bad Request from https://{MYSUBDOMAIN}.openai.azure.com/openai/deployments/o1-preview/completions?api-version=2024-09-01-preview
{"error":{"code":"OperationNotSupported","message":"The completion operation does not work with the specified model, o1-preview. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993."}}
We don't do anything behind the scenes to try to ignore function calls.
It seems like the completions endpoint may not be valid for o1?
{"error":{"code":"OperationNotSupported","message":"The completion operation does not work with the specified model, o1-preview. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993."}}
@sheldonhull
[Extension Host] HTTP 400 Bad Request from https://{MYSUBDOMAIN}.openai.azure.com/openai/deployments/o1-preview/completions?api-version=2024-09-01-preview
The bug seems to be that o1-preview requests are going to the legacy /completions endpoint instead of /chat/completions. Fixing now.
As a workaround for now, you can set useLegacyCompletionsEndpoint: false in the model config in your config.json.
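A minimal sketch of where that flag sits in config.json, keeping only the fields already mentioned in this thread; your existing Azure connection settings (provider, apiBase, apiKey, apiVersion, etc.) stay exactly as they are in your current working config:

{
  "models": [
    {
      "title": "azure-openai-o1-preview",
      "model": "o1-preview",
      "useLegacyCompletionsEndpoint": false,
      "completionOptions": {
        "maxTokens": 32768
      }
    }
  ]
}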