[Scenario]: Terminal Chat
We'll be using this thread to track the ongoing work for Copilot for the Windows Terminal.
This is not something that's in public preview yet.
More details will follow soon!
Initial PR: #16285
### Tasks
- [ ] #16401
- [ ] https://github.com/microsoft/terminal/issues/16435
- [ ] https://github.com/microsoft/terminal/issues/16442
- [ ] https://github.com/microsoft/terminal/issues/16485
- [ ] https://github.com/microsoft/terminal/issues/16484
It doesn't seem to be working for me, even though the network request succeeds.
Network request and response:
> It doesn't seem to be working for me, even though the network request succeeds.
Thanks for letting us know about this! What is the endpoint you are using?
The endpoint I use is
https://spark-01.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-05-15
@WeihanLi do you have any content filters on for that endpoint? We expect the endpoint to have content filters on with severity = safe
> do you have any content filters on for that endpoint?
I think so; it may be related to that. I'm using the default settings, so the content filter should exist.
Could you check that? Based on the screenshot you sent, I think your content filters might be switched off. With the content filters on, the JSON response should contain a `prompt_filter_results` or `prompt_annotations` field.
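For anyone debugging this locally, a quick way to check a captured response body is to look for those annotation fields. This is a minimal sketch; the field names (`prompt_filter_results`, and the older `prompt_annotations`) come from the Azure OpenAI chat completions response format.

```python
import json

def has_content_filter_annotations(response_body: str) -> bool:
    """Return True if the Azure OpenAI response body includes content
    filter annotations (prompt_filter_results, or the older
    prompt_annotations field)."""
    data = json.loads(response_body)
    return "prompt_filter_results" in data or "prompt_annotations" in data

# Trimmed-down example of a response with filters enabled:
sample = '{"id": "chatcmpl-1", "choices": [], "prompt_filter_results": []}'
print(has_content_filter_annotations(sample))  # True when filters are on
```

If this returns False for your deployment's responses, the content filters on that endpoint are likely not active.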
There are no items on the content filter page. Are there any other places I need to check?
Same error here now on all deployments. Even my old deployments no longer work and get this error.
There does not appear to be any way to set the content filtering to "safe" via the Playground UI; you can only choose Low, Medium, or High.
@mikenelson-io I had that too, so I went into content filters, turned everything on at the highest level, and then cleared the credentials in Windows Terminal, and it worked...
The troubleshooting experience could really use some ❤️, e.g. logging more info on failures.
@PankajBhojwani I see your PR for supporting OpenAI in Terminal Chat: https://github.com/microsoft/terminal/pull/17540. Could you please consider supporting self-hosted LLM models? Different models may require different request headers. A similar feature is supported in a VS Code extension; see https://docs.continue.dev/setup/configuration, section "Self-hosting an open-source model", where users can specify their own request headers in config.json. I'm hoping Terminal Chat can support more self-hosted open-source models.
> Could you please consider supporting self-hosted LLM models?
@nguyen-dows thoughts on this?
Still no resolution for me on this issue. With or without content filters, I always get the message about the model not being safe. I created a new model, new filters, etc.