Othman El houfi
@0xpayne This is a highly important fix. When will it be available pls?
@Sineos I totally agree. GPT can't create or handle logic, even more so when the code is broken down into chunks. Code quality is correlated with the dependencies between variables, functions, libraries,...
@horizonchasers It refers to the Deployment ID of the model. For example, it could be named `gpt-35-turbo`, `gpt-4-turbo`, or anything else; it depends on the deployments in your Azure OpenAI resource.
Same issue.
```
14:50:04 [INFO] Output path: signal-2021-05-11-20-19-32
14:50:04 [INFO] Input file: Desktop/signal-2021-05-11-20-19-32.backup
14:50:04 [DEBUG] (1) signal_backup_decode::input: Frame type: Header Frame (salt: [55, AA, 0F, CD, 2A, BD, 38, E9,...
```
I just fixed the problem in my fork. Basically, you don't take into account that the HTTP request URL for Azure OpenAI is different from OpenAI's (https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference):
```
...
```
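For reference, here is a minimal sketch of that difference, based on the Microsoft docs linked above. The resource name, deployment name, api-version value, and the `apiKey` variable are placeholders, not code from this repo:

```
// Placeholder key for illustration only.
const apiKey = '<your key>';

// OpenAI: fixed host, model chosen in the request body, bearer-token auth.
const openaiUrl = 'https://api.openai.com/v1/chat/completions';
const openaiHeaders = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${apiKey}`,
};

// Azure OpenAI: the host is your resource, the deployment name is part of the
// path, an api-version query parameter is required, and auth uses 'api-key'.
const azureUrl =
  'https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT' +
  '/chat/completions?api-version=2023-05-15';
const azureHeaders = {
  'Content-Type': 'application/json',
  'api-key': apiKey,
};
```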
You could map deployment names to BetterGPT model names in your own fork (a sketch of such a mapping is below). Or, to make it even simpler, change the deployment names in your OAI Azure...
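A minimal sketch of what such a mapping could look like; the deployment names on the left are hypothetical and would have to match whatever you actually created in your Azure resource:

```
// Hypothetical mapping from Azure deployment names (keys) to the model names
// BetterGPT expects (values); adjust the keys to your own deployments.
const deploymentToModel: Record<string, string> = {
  'my-gpt35-deployment': 'gpt-3.5-turbo',
  'my-gpt4-deployment': 'gpt-4',
};

// Reverse lookup: given the model selected in the UI, pick the deployment.
const modelToDeployment = Object.fromEntries(
  Object.entries(deploymentToModel).map(([dep, model]) => [model, dep])
);
```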
I am trying to make some changes in order to connect with Microsoft Azure OpenAI, but I can't find the 'new' instance of the OpenAI API. Any help?
I finally managed to edit the code so that it works with the Azure OpenAI API, and it's pretty easy: you just need to edit these two functions in "Chat.svelte":
```
response =...
```
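For anyone who wants to try the same thing, here is a rough sketch of how one of those request calls could be rewritten for Azure OpenAI. The function name, parameter names, and endpoint pieces are assumptions for illustration, not the repo's actual code:

```
// Hypothetical helper: sends a chat completion request to Azure OpenAI
// instead of api.openai.com. All names here are illustrative.
async function sendAzureChat(
  messages: { role: string; content: string }[],
  apiEndpoint: string, // e.g. https://YOUR-RESOURCE.openai.azure.com
  deployment: string,  // your Azure deployment name
  apiKey: string
) {
  const url =
    `${apiEndpoint}/openai/deployments/${deployment}` +
    `/chat/completions?api-version=2023-05-15`;

  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Azure uses an 'api-key' header instead of 'Authorization: Bearer'.
      'api-key': apiKey,
    },
    // The deployment already determines the model, so no 'model' field is sent.
    body: JSON.stringify({ messages, temperature: 0.7 }),
  });

  if (!response.ok) {
    throw new Error(`Azure OpenAI request failed: ${response.status}`);
  }
  return response.json();
}
```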
@Nashex Any ideas on how to fix this issue pls?
I am up to date with the main branch. I think it may be because I changed the request/response function "a little bit" so that it can work with the Azure OpenAI...