
How to choose a provider in the openai library

Abdullrahman-Dawaliby opened this issue 1 year ago · 32 comments

Hello, I am using this repo for my Python Telegram bot: https://github.com/n3d1117/chatgpt-telegram-bot/tree/main. It uses the openai library, and I want to select only one provider (actually, only You and ChatBase, if possible). Can you please help? I am facing many problems with the other providers, and I am close to a beginner in Python. BTW, I am using the API (http://0.0.0.0:10000/v1); it works, but with many problems due to failures in other providers. Sorry for both the bad English and the bad description.

Environment

  • python 3.11
  • My server is in Germany

Abdullrahman-Dawaliby · Feb 13 '24 22:02

Hello, you can't select two providers in the API, but you can select one provider. Simply pass provider="You" in the create call.

Edit: OK, this may not be possible either, since it isn't your application.
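Since the bot talks to the interference API over HTTP, pinning a single provider amounts to adding a provider field to the request body (a g4f extension to the OpenAI schema, so treat the exact field name and accepted values as assumptions based on this thread). A minimal sketch:

```python
# Sketch: build an OpenAI-style request body carrying g4f's non-standard
# "provider" field (the field name is an assumption based on this thread).
def build_request(model: str, provider: str, messages: list) -> dict:
    body = {"model": model, "messages": messages}
    if provider:
        body["provider"] = provider  # e.g. "You" or "OpenaiChat"
    return body

body = build_request(
    "gpt-3.5-turbo", "You",
    [{"role": "user", "content": "Hello"}],
)
print(body["provider"])  # → You
```

Whether the server honors this field depends on the g4f version; later in the thread it becomes an official parameter.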

hlohaus · Feb 14 '24 08:02

> Hello, you can't select two providers in the API, but you can select one provider. Simply pass provider="You" in the create call.
>
> Edit: OK, this may not be possible either, since it isn't your application.

I saw an option, g4f api --ignored-providers. Am I able to use it to ignore all providers and use only the You provider? I tried to do so but I got this error:

g4f api --ignored-providers 'AItianhu', 'AItianhuSpace', 'Acytoo', 'AiAsk', 'AiChatOnline', 'AiChatting', 'AiService', 'Aibn', 'Aichat', 'Ails', 'Aivvm', 'AsyncGeneratorProvider', 'AsyncProvider', 'Aura', 'Bard', 'BaseProvider', 'Berlin', 'Bestim', 'Bing', 'ChatAiGpt', 'ChatAnywhere', 'ChatBase', 'ChatForAi', 'Chatgpt4Online', 'ChatgptAi', 'ChatgptDemo', 'ChatgptDemoAi', 'ChatgptDuo', 'ChatgptFree', 'ChatgptLogin', 'ChatgptNext', 'ChatgptX', 'Chatxyz', 'CodeLinkAva', 'CreateImagesProvider', 'Cromicle', 'DeepInfra', 'DfeHub', 'EasyChat', 'Equing', 'FakeGpt', 'FastGpt', 'Forefront', 'FreeChatgpt', 'FreeGpt', 'GPTalk', 'GeekGpt', 'Gemini', 'GeminiProChat', 'GetGpt', 'Gpt6', 'GptChatly', 'GptForLove', 'GptGo', 'GptGod', 'GptTalkRu', 'H2o', 'Hashnode', 'HuggingChat', 'Koala', 'Komo', 'Liaobots', 'Llama2', 'Lockchat', 'MikuChat', 'MyShell', 'Myshell', 'OnlineGpt', 'Opchatgpts', 'OpenAssistant', 'OpenaiChat', 'PerplexityAi', 'PerplexityLabs', 'Phind', 'Pi', 'Poe', 'Raycast', 'RetryProvider', 'TalkAi', 'Theb', 'ThebApi', 'V50', 'Vercel', 'Vitalentum', 'Wewordle', 'Wuguokai', 'Ylokh', 'Yqcloud'

And I got this:

usage: g4f api [-h] [--bind BIND] [--debug DEBUG] [--ignored-providers
{AItianhu,AItianhuSpace,Acytoo,AiAsk,AiChatOnline,AiChatting,AiService,Aibn,Aichat,Ails,Aivvm,AsyncGeneratorProvider,AsyncProvider,Aura,Bard,BaseProvider,Berlin,Bestim,Bing,ChatAiGpt,ChatAnywhere,ChatBase,ChatForAi,Chatgpt4Online,ChatgptAi,ChatgptDemo,ChatgptDemoAi,ChatgptDuo,ChatgptFree,ChatgptLogin,ChatgptNext,ChatgptX,Chatxyz,CodeLinkAva,CreateImagesProvider,Cromicle,DeepInfra,DfeHub,EasyChat,Equing,FakeGpt,FastGpt,Forefront,FreeChatgpt,FreeGpt,GPTalk,GeekGpt,Gemini,GeminiProChat,GetGpt,Gpt6,GptChatly,GptForLove,GptGo,GptGod,GptTalkRu,H2o,Hashnode,HuggingChat,Koala,Komo,Liaobots,Llama2,Lockchat,MikuChat,MyShell,Myshell,OnlineGpt,Opchatgpts,OpenAssistant,OpenaiChat,PerplexityAi,PerplexityLabs,Phind,Pi,Poe,Raycast,RetryProvider,TalkAi,Theb,ThebApi,V50,Vercel,Vitalentum,Wewordle,Wuguokai,Ylokh,You,Yqcloud} [{AItianhu,AItianhuSpace,Acytoo,AiAsk,AiChatOnline,AiChatting,AiService,Aibn,Aichat,Ails,Aivvm,AsyncGeneratorProvider,AsyncProvider,Aura,Bard,BaseProvider,Berlin,Bestim,Bing,ChatAiGpt,ChatAnywhere,ChatBase,ChatForAi,Chatgpt4Online,ChatgptAi,ChatgptDemo,ChatgptDemoAi,ChatgptDuo,ChatgptFree,ChatgptLogin,ChatgptNext,ChatgptX,Chatxyz,CodeLinkAva,CreateImagesProvider,Cromicle,DeepInfra,DfeHub,EasyChat,Equing,FakeGpt,FastGpt,Forefront,FreeChatgpt,FreeGpt,GPTalk,GeekGpt,Gemini,GeminiProChat,GetGpt,Gpt6,GptChatly,GptForLove,GptGo,GptGod,GptTalkRu,H2o,Hashnode,HuggingChat,Koala,Komo,Liaobots,Llama2,Lockchat,MikuChat,MyShell,Myshell,OnlineGpt,Opchatgpts,OpenAssistant,OpenaiChat,PerplexityAi,PerplexityLabs,Phind,Pi,Poe,Raycast,RetryProvider,TalkAi,Theb,ThebApi,V50,Vercel,Vitalentum,Wewordle,Wuguokai,Ylokh,You,Yqcloud} ...]] g4f api: error: argument --ignored-providers: invalid choice: 'AItianhu,' (choose from 'AItianhu', 'AItianhuSpace', 'Acytoo', 'AiAsk', 'AiChatOnline', 'AiChatting', 'AiService', 'Aibn', 'Aichat', 'Ails', 'Aivvm', 'AsyncGeneratorProvider', 'AsyncProvider', 'Aura', 'Bard', 'BaseProvider', 'Berlin', 'Bestim', 'Bing', 'ChatAiGpt', 'ChatAnywhere', 'ChatBase', 
'ChatForAi', 'Chatgpt4Online', 'ChatgptAi', 'ChatgptDemo', 'ChatgptDemoAi', 'ChatgptDuo', 'ChatgptFree', 'ChatgptLogin', 'ChatgptNext', 'ChatgptX', 'Chatxyz', 'CodeLinkAva', 'CreateImagesProvider', 'Cromicle', 'DeepInfra', 'DfeHub', 'EasyChat', 'Equing', 'FakeGpt', 'FastGpt', 'Forefront', 'FreeChatgpt', 'FreeGpt', 'GPTalk', 'GeekGpt', 'Gemini', 'GeminiProChat', 'GetGpt', 'Gpt6', 'GptChatly', 'GptForLove', 'GptGo', 'GptGod', 'GptTalkRu', 'H2o', 'Hashnode', 'HuggingChat', 'Koala', 'Komo', 'Liaobots', 'Llama2', 'Lockchat', 'MikuChat', 'MyShell', 'Myshell', 'OnlineGpt', 'Opchatgpts', 'OpenAssistant', 'OpenaiChat', 'PerplexityAi', 'PerplexityLabs', 'Phind', 'Pi', 'Poe', 'Raycast', 'RetryProvider', 'TalkAi', 'Theb', 'ThebApi', 'V50', 'Vercel', 'Vitalentum', 'Wewordle', 'Wuguokai', 'Ylokh', 'You', 'Yqcloud')

Abdullrahman-Dawaliby · Feb 14 '24 13:02

You should separate the list items in --ignored-providers with spaces, not commas, like: g4f api --ignored-providers AItianhu AItianhuSpace ...
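The comma-separated form fails because argparse validates each space-separated token against its choices list, so a token like 'AItianhu,' (with the comma attached) is not a valid choice. A self-contained demonstration with a shortened, illustrative choices list:

```python
# Why commas fail: argparse checks each whitespace-separated token against
# `choices`, so "AItianhu," (comma attached) is rejected as a whole token.
import argparse

parser = argparse.ArgumentParser(prog="g4f api")
parser.add_argument("--ignored-providers", nargs="+",
                    choices=["AItianhu", "AItianhuSpace", "Bing"])

# Space-separated tokens each match a choice, so this parses fine:
ok = parser.parse_args(["--ignored-providers", "AItianhu", "Bing"])
print(ok.ignored_providers)  # → ['AItianhu', 'Bing']

# With commas, "AItianhu," is one token and not a valid choice:
try:
    parser.parse_args(["--ignored-providers", "AItianhu,", "Bing"])
except SystemExit:
    print("comma-separated form rejected")
```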

AndPim4912 · Feb 14 '24 15:02

@hlohaus Is it possible to pass the chosen provider not only via code, but also as an argument in the URL? E.g. http://localhost:1337/v1?provider=OpenaiChat or http://localhost:1337/v1/chat/completions?provider=OpenaiChat

iG8R · Feb 19 '24 19:02

Why, when I interact with the g4f API through http://localhost:1337/v1/chat/completions and use the gpt-4 model, does g4f never use OpenaiChat, even though via the WebUI all requests to OpenaiChat with the gpt-4 model are processed OK?

(venv) c:\gpt4free>g4f api --debug DEBUG
INFO:     Started server process [14772]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
Using RetryProvider provider and gpt-4 model
Using Bing provider
Using RetryProvider provider and gpt-4 model
Using Bing provider
Using RetryProvider provider and gpt-4 model
Using Bing provider
Using RetryProvider provider and gpt-4 model
Using Liaobots provider
Using RetryProvider provider and gpt-4 model
Using Bing provider
Using RetryProvider provider and gpt-4 model
Using Bing provider
INFO:     127.0.0.1:53817 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using Bing provider
INFO:     127.0.0.1:53817 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using Liaobots provider
Liaobots: ClientResponseError: 401, message='Unauthorized', url=URL('https://liaobots.work/api/user')
Using Bing provider
INFO:     127.0.0.1:53817 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using Bing provider
INFO:     127.0.0.1:53817 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using Liaobots provider
Liaobots: ClientResponseError: 524, message='', url=URL('https://liaobots.work/api/user')
Using Bing provider
INFO:     127.0.0.1:53817 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using Liaobots provider
Liaobots: ClientResponseError: 401, message='Unauthorized', url=URL('https://liaobots.work/api/user')
Using Bing provider
INFO:     127.0.0.1:53817 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using Liaobots provider
Liaobots: ClientResponseError: 401, message='Unauthorized', url=URL('https://liaobots.work/api/user')
Using Bing provider
INFO:     127.0.0.1:53817 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using Liaobots provider
Liaobots: ClientResponseError: 401, message='Unauthorized', url=URL('https://liaobots.work/api/user')
Using Bing provider
Using RetryProvider provider and gpt-4 model
Using Liaobots provider
Using RetryProvider provider and gpt-4 model
Using Liaobots provider
Using RetryProvider provider and gpt-4 model
Using Bing provider
Using RetryProvider provider and gpt-4 model
Using Bing provider
Using RetryProvider provider and gpt-4 model
Using Bing provider
Using RetryProvider provider and gpt-4 model
Using Bing provider
Using RetryProvider provider and gpt-4 model

I also tried to ignore all providers except OpenaiChat, but received only the "No provider found" error.

(venv) c:\gpt4free>g4f api --debug DEBUG --ignored-providers AItianhu AItianhuSpace Acytoo AiAsk AiChatOnline AiChatting AiService Aibn Aichat Ails Aivvm AsyncGeneratorProvider AsyncProvider Aura Bard BaseProvider Berlin Bestim Bing ChatAiGpt ChatAnywhere ChatBase ChatForAi Chatgpt4Online ChatgptAi ChatgptDemo ChatgptDemoAi ChatgptDuo ChatgptFree ChatgptLogin ChatgptNext ChatgptX Chatxyz CodeLinkAva CreateImagesProvider Cromicle DeepInfra DfeHub EasyChat Equing FakeGpt FastGpt Forefront FreeChatgpt FreeGpt GPTalk GeekGpt Gemini GeminiProCCFgptCN GeminiProChat GeminiProFreeChatGPT GeminiProGBeCo GeminiProTop GetGpt Gpt6 GptChatly GptForLove GptGo GptGod GptTalkRu H2o Hashnode HuggingChat Koala Komo Liaobots Llama2 Lockchat MikuChat MyShell Myshell OnlineGpt Opchatgpts OpenAssistant PerplexityAi PerplexityLabs Phind Pi Poe Raycast RetryProvider TalkAi Theb ThebApi V50 Vercel Vitalentum Wewordle Wuguokai Ylokh You Yqcloud
INFO:     Started server process [4520]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
Using RetryProvider provider and gpt-4 model
ERROR:root:No provider found
Traceback (most recent call last):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 96, in chat_completions
    response = g4f.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\__init__.py", line 140, in create
    return result if stream else ''.join([str(chunk) for chunk in result])
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\retry_provider.py", line 65, in create_completion
    self.raise_exceptions()
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\retry_provider.py", line 118, in raise_exceptions
    raise RetryNoProviderError("No provider found")
g4f.errors.RetryNoProviderError: No provider found
INFO:     127.0.0.1:53159 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
Using RetryProvider provider and gpt-4 model
ERROR:root:No provider found
Traceback (most recent call last):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 96, in chat_completions
    response = g4f.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\__init__.py", line 140, in create
    return result if stream else ''.join([str(chunk) for chunk in result])
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\retry_provider.py", line 65, in create_completion
    self.raise_exceptions()
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\retry_provider.py", line 118, in raise_exceptions
    raise RetryNoProviderError("No provider found")
g4f.errors.RetryNoProviderError: No provider found
INFO:     127.0.0.1:53160 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error

P.S. I used the g4f API in conjunction with the immersive-translate add-on (https://immersivetranslate.com/en/) with the following settings.

iG8R · Feb 19 '24 21:02

I managed to overcome this issue; all you need to do is modify the models.py file:

```python
# in the imports near the top of models.py:
    Bing,
    OpenaiChat,
    You,

# ...

# in the gpt-4 model definition:
gpt_4 = Model(
    name          = 'gpt-4',
    base_provider = 'openai',
    best_provider = RetryProvider([
        OpenaiChat, Bing, Liaobots,
    ])
)
```
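For context, the RetryProvider seen in these logs tries each provider in its list in order and falls back to the next one on failure. A toy sketch of that behavior (illustrative code, not g4f's actual implementation):

```python
# Toy model of RetryProvider: try providers in order, return the first
# successful result, and raise if every provider fails.
def retry_create(providers, request):
    errors = {}
    for provider in providers:
        try:
            return provider(request)
        except Exception as exc:  # a real implementation narrows this
            errors[provider.__name__] = exc
    raise RuntimeError(f"No provider succeeded: {list(errors)}")

def flaky(request):
    raise ConnectionError("401 Unauthorized")

def working(request):
    return "ok: " + request

print(retry_create([flaky, working], "hello"))  # → ok: hello
```

This is why trimming the best_provider list above changes which providers the API attempts.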

But another issue arose: after using it and experimenting for some time, the error OpenaiChat: AttributeError: 'StreamResponse' object has no attribute 'status' appears.

(venv) c:\gpt4free>g4f api --debug DEBUG --ignored-providers AItianhu AItianhuSpace Acytoo AiAsk AiChatOnline AiChatting AiService Aibn Aichat Ails Aivvm AsyncGeneratorProvider AsyncProvider Aura Bard BaseProvider Berlin Bestim Bing ChatAiGpt ChatAnywhere ChatBase ChatForAi Chatgpt4Online ChatgptAi ChatgptDemo ChatgptDemoAi ChatgptDuo ChatgptFree ChatgptLogin ChatgptNext ChatgptX Chatxyz CodeLinkAva CreateImagesProvider Cromicle DeepInfra DfeHub EasyChat Equing FakeGpt FastGpt Forefront FreeChatgpt FreeGpt GPTalk GeekGpt Gemini GeminiProCCFgptCN GeminiProChat GeminiProFreeChatGPT GeminiProGBeCo GeminiProTop GetGpt Gpt6 GptChatly GptForLove GptGo GptGod GptTalkRu H2o Hashnode HuggingChat Koala Komo Liaobots Llama2 Lockchat MikuChat MyShell Myshell OnlineGpt Opchatgpts OpenAssistant PerplexityAi PerplexityLabs Phind Pi Poe Raycast RetryProvider TalkAi Theb ThebApi V50 Vercel Vitalentum Wewordle Wuguokai Ylokh You Yqcloud
INFO:     Started server process [13896]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Error reading cookies from chrome for chat.openai.com: [Errno 13] Permission denied: 'C:\\Users\\iG8R\\AppData\\Roaming\\..\\Local\\Google\\Chrome\\User Data\\Default\\Network\\Cookies'
Read cookies from firefox for chat.openai.com
Open browser with config dir: C:\Users\iG8R\AppData\Local\g4f\g4f
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56504 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56504 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56504 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56502 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO:     127.0.0.1:56504 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO:     127.0.0.1:56507 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO:     127.0.0.1:56506 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO:     127.0.0.1:56505 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO:     127.0.0.1:56501 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56791 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
OpenaiChat: AttributeError: 'StreamResponse' object has no attribute 'status'
ERROR:root:RetryProvider failed:
OpenaiChat: AttributeError: 'StreamResponse' object has no attribute 'status'
Traceback (most recent call last):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 96, in chat_completions
    response = g4f.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\__init__.py", line 140, in create
    return result if stream else ''.join([str(chunk) for chunk in result])
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\retry_provider.py", line 65, in create_completion
    self.raise_exceptions()
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\retry_provider.py", line 114, in raise_exceptions
    raise RetryProviderError("RetryProvider failed:\n" + "\n".join([
g4f.errors.RetryProviderError: RetryProvider failed:
OpenaiChat: AttributeError: 'StreamResponse' object has no attribute 'status'
INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
Using RetryProvider provider and gpt-4 model
Using OpenaiChat provider
OpenaiChat: AttributeError: 'StreamResponse' object has no attribute 'status'
ERROR:root:RetryProvider failed:
OpenaiChat: AttributeError: 'StreamResponse' object has no attribute 'status'
Traceback (most recent call last):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\api\__init__.py", line 96, in chat_completions
    response = g4f.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\__init__.py", line 140, in create
    return result if stream else ''.join([str(chunk) for chunk in result])
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\retry_provider.py", line 65, in create_completion
    self.raise_exceptions()
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\retry_provider.py", line 114, in raise_exceptions
    raise RetryProviderError("RetryProvider failed:\n" + "\n".join([
g4f.errors.RetryProviderError: RetryProvider failed:
OpenaiChat: AttributeError: 'StreamResponse' object has no attribute 'status'
INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error

Even through the WebUI, the gpt-4 model with the OpenaiChat provider brings up the same error, although the gpt-3.5 models still work fine.

Using OpenaiChat provider and gpt-3.5-turbo model
127.0.0.1 - - [20/Feb/2024 00:55:03] "POST /backend-api/v2/conversation HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [20/Feb/2024 00:55:03] "POST /backend-api/v2/conversation HTTP/1.1" 200 -
Using OpenaiChat provider and gpt-4 model
ERROR:root:'StreamResponse' object has no attribute 'status'
Traceback (most recent call last):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\gui\server\backend.py", line 191, in _create_response_stream
    for chunk in ChatCompletion.create(**kwargs):
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\base_provider.py", line 205, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\asyncio\base_events.py", line 664, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\base_provider.py", line 201, in await_callback
    return await callback()
           ^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\needs_auth\OpenaiChat.py", line 388, in create_async_generator
    raise RuntimeError(f"Response {response.status}: {await response.text()}")
                                   ^^^^^^^^^^^^^^^
AttributeError: 'StreamResponse' object has no attribute 'status'
127.0.0.1 - - [20/Feb/2024 00:55:17] "POST /backend-api/v2/conversation HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [20/Feb/2024 00:55:17] "POST /backend-api/v2/conversation HTTP/1.1" 200 -

iG8R · Feb 19 '24 23:02

OpenaiChat supports GPT-4 only with a Plus account. Try to use the "You" provider. It supports GPT-4 without auth.

hlohaus · Feb 22 '24 14:02

Hm, in my case I can freely make requests to the gpt-4 model.

iG8R · Feb 22 '24 14:02

@iG8R So have you found a way to select only one? Even when I ignore all other providers except the You one, it still tries them first.

Abdullrahman-Dawaliby · Feb 22 '24 14:02

@Abdullrahman-Dawaliby

Yes, I can use only the desired provider; in my case it is OpenaiChat. You can look at the logs I posted earlier. Also, don't forget to check whether your provider supports the model you use in your requests.

Here is the console command that excludes all providers and leaves only OpenaiChat:

g4f api --debug DEBUG --ignored-providers AItianhu AItianhuSpace Acytoo AiAsk AiChatOnline AiChatting AiService Aibn Aichat Ails Aivvm AsyncGeneratorProvider AsyncProvider Aura Bard BaseProvider Berlin Bestim Bing ChatAiGpt ChatAnywhere ChatBase ChatForAi Chatgpt4Online ChatgptAi ChatgptDemo ChatgptDemoAi ChatgptDuo ChatgptFree ChatgptLogin ChatgptNext ChatgptX Chatxyz CodeLinkAva CreateImagesProvider Cromicle DeepInfra DfeHub EasyChat Equing FakeGpt FastGpt Forefront FreeChatgpt FreeGpt GPTalk GeekGpt Gemini GeminiProCCFgptCN GeminiProChat GeminiProFreeChatGPT GeminiProGBeCo GeminiProTop GetGpt Gpt6 GptChatly GptForLove GptGo GptGod GptTalkRu H2o Hashnode HuggingChat Koala Komo Liaobots Llama2 Lockchat MikuChat MyShell Myshell OnlineGpt Opchatgpts OpenAssistant PerplexityAi PerplexityLabs Phind Pi Poe Raycast RetryProvider TalkAi Theb ThebApi V50 Vercel Vitalentum Wewordle Wuguokai Ylokh You Yqcloud
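Typing that list by hand is error-prone; it could also be generated programmatically. A sketch, where the short all_providers list is a hypothetical stand-in for the real provider registry (which varies by g4f version):

```python
# Build an --ignored-providers list that keeps only the providers you want.
keep = {"OpenaiChat"}

# Hypothetical stand-in for the real provider list (e.g. from g4f.Provider):
all_providers = ["Bing", "Liaobots", "OpenaiChat", "You"]

ignored = " ".join(p for p in sorted(all_providers) if p not in keep)
print(f"g4f api --debug DEBUG --ignored-providers {ignored}")
# → g4f api --debug DEBUG --ignored-providers Bing Liaobots You
```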

iG8R · Feb 22 '24 15:02

@hlohaus

> Try to use the "You" provider. It supports GPT-4 without auth.

I tried the You provider, but it seems worse than OpenaiChat in terms of response quality and the number of tokens it can handle per request. I also tried using Bing with gpt-4 in the g4f API; it's so slow that I decided I'd rather not use it than test my nerves.

iG8R · Feb 22 '24 15:02

@hlohaus

> Try to use the "You" provider. It supports GPT-4 without auth.

> I tried the You provider, but it seems worse than OpenaiChat in terms of response quality and the number of tokens it can handle per request. I also tried using Bing with gpt-4 in the g4f API; it's so slow that I decided I'd rather not use it than test my nerves.

Does OpenaiChat need auth or any cookies? I have used You just because I only need gpt-3.5-turbo, without any problems or auth. Thank you, BTW.

Abdullrahman-Dawaliby · Feb 22 '24 15:02

@Abdullrahman-Dawaliby To use OpenaiChat, you need to log in to their site, auth0.openai.com

iG8R · Feb 22 '24 15:02

@iG8R I added your provider param idea:

http://localhost:1337/v1/chat/completions?provider=You

I can now use the translation service on my phone. g4f also runs on my phone.
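A client-side sketch of using the new query parameter with only the Python standard library (assumes a g4f API server on localhost:1337; the request is built here but not actually sent):

```python
# Build a chat-completions request that pins the provider via the new
# ?provider= query parameter. Sending it requires a running g4f API server.
import json
import urllib.request

body = json.dumps({
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode()

req = urllib.request.Request(
    "http://localhost:1337/v1/chat/completions?provider=You",
    data=body,
    headers={"Content-Type": "application/json"},
)
print(req.full_url)
# urllib.request.urlopen(req)  # uncomment with a running g4f API server
```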

hlohaus · Feb 23 '24 04:02

@hlohaus Oh! Thanks a lot! Does it support several providers in URL parameters? E.g. http://localhost:1337/v1/chat/completions?provider=You,OpenaiChat

iG8R · Feb 23 '24 04:02

Unfortunately, something went wrong:

(venv) c:\gpt4free>g4f api --debug DEBUG
INFO:     Started server process [21076]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     127.0.0.1:55097 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55097 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55097 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55097 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55097 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55097 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55133 - "POST /v1/chat/completions?provider=You HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55133 - "POST /v1/chat/completions?provider=You HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55133 - "POST /v1/chat/completions?provider=You HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55133 - "POST /v1/chat/completions?provider=You HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55133 - "POST /v1/chat/completions?provider=You HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:55133 - "POST /v1/chat/completions?provider=You HTTP/1.1" 422 Unprocessable Entity

And when using the host name without parameters:

(venv) c:\gpt4free>g4f api --debug DEBUG --ignored-providers AItianhu AItianhuSpace Acytoo AiAsk AiChatOnline AiChatting AiService Aibn Aichat Ails Aivvm AsyncGeneratorProvider AsyncProvider Aura Bard BaseProvider Berlin Bestim Bing ChatAiGpt ChatAnywhere ChatBase ChatForAi Chatgpt4Online ChatgptAi ChatgptDemo ChatgptDemoAi ChatgptDuo ChatgptFree ChatgptLogin ChatgptNext ChatgptX Chatxyz CodeLinkAva CreateImagesProvider Cromicle DeepInfra DfeHub EasyChat Equing FakeGpt FastGpt Forefront FreeChatgpt FreeGpt GPTalk GeekGpt Gemini GeminiProCCFgptCN GeminiProChat GeminiProFreeChatGPT GeminiProGBeCo GeminiProTop GetGpt Gpt6 GptChatly GptForLove GptGo GptGod GptTalkRu H2o Hashnode HuggingChat Koala Komo Liaobots Llama2 Lockchat MikuChat MyShell Myshell OnlineGpt Opchatgpts OpenAssistant PerplexityAi PerplexityLabs Phind Pi Poe Raycast RetryProvider TalkAi Theb ThebApi V50 Vercel Vitalentum Wewordle Wuguokai Ylokh You Yqcloud
INFO:     Started server process [13064]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     127.0.0.1:55148 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:52067 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:52067 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:52067 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:52067 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:52067 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:52067 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity

iG8R · Feb 23 '24 05:02

Which request body do you use? Do you use the newest version from git or pip?

hlohaus · Feb 23 '24 12:02

I used the latest build from the releases page: https://github.com/xtekky/gpt4free/archive/refs/tags/0.2.1.8.zip. With this release, https://github.com/xtekky/gpt4free/archive/refs/tags/0.2.1.6.zip, everything is fine (obviously, I don't use the URL parameter, which is new in the newer version) when I use the g4f API as a "proxy" for immersive-translate, as mentioned in the post above: https://github.com/xtekky/gpt4free/issues/1584#issuecomment-1953153770

iG8R · Feb 23 '24 12:02

On my smartphone the new parameter works:


hlohaus · Feb 23 '24 16:02

@hlohaus Could you please write the request you used?

iG8R · Feb 23 '24 17:02

I tested this for example:

```python
from openai import OpenAI

client = OpenAI(
    api_key="asdf",
    base_url="http://localhost:1337/v1"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    extra_body={"provider": "You"},
    messages=[{"role": "user", "content": "write a poem about a tree"}],
)
print(response.choices[0].message.content)
```

hlohaus · Feb 23 '24 18:02

Some additional info: the g4f log from the terminal:

(venv) c:\gpt4free>g4f api --debug DEBUG
INFO:     Started server process [17712]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO:     127.0.0.1:49534 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:49534 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:49534 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:49534 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:49534 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:49534 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity

Request from the plugin's browser console:

{
	"messages": [
		{
			"content": "You are a professional, authentic translation engine, only returns translations, without such a phrase as \"The text you provided appears to be in Chinese. Here is the English translation:\" or \"Here is the English translation of the text:\" or \"The text you provided appears to be a passage from a novel or a story. Here is the English translation:\"",
			"role": "system"
		},
		{
			"content": "Translate the following text from Traditional Chinese to English:",
			"role": "user"
		},
		{
			"content": "第1293章 陽神可滅,落子無悔!(2/2)_執魔_蝌蚪小說",
			"role": "user"
		}
	],
	"model": "gpt-3.5-turbo",
	"temperature": 0
}


Response from the plugin's browser console:

{
	"detail": [
		{
			"type": "missing",
			"loc": [
				"body",
				"provider"
			],
			"msg": "Field required",
			"input": {
				"model": "gpt-3.5-turbo",
				"temperature": 0,
				"messages": [
					{
						"role": "system",
						"content": "You are a professional, authentic translation engine, only returns translations, without such a phrase as \"The text you provided appears to be in Chinese. Here is the English translation:\" or \"Here is the English translation of the text:\" or \"The text you provided appears to be a passage from a novel or a story. Here is the English translation:\""
					},
					{
						"role": "user",
						"content": "Translate the following text from Traditional Chinese to English:"
					},
					{
						"role": "user",
						"content": "第1293章 陽神可滅,落子無悔!(2/2)_執魔_蝌蚪小說"
					}
				]
			},
			"url": "https://errors.pydantic.dev/2.5/v/missing"
		},
		{
			"type": "missing",
			"loc": [
				"body",
				"stop"
			],
			"msg": "Field required",
			"input": {
				"model": "gpt-3.5-turbo",
				"temperature": 0,
				"messages": [
					{
						"role": "system",
						"content": "You are a professional, authentic translation engine, only returns translations, without such a phrase as \"The text you provided appears to be in Chinese. Here is the English translation:\" or \"Here is the English translation of the text:\" or \"The text you provided appears to be a passage from a novel or a story. Here is the English translation:\""
					},
					{
						"role": "user",
						"content": "Translate the following text from Traditional Chinese to English:"
					},
					{
						"role": "user",
						"content": "第1293章 陽神可滅,落子無悔!(2/2)_執魔_蝌蚪小說"
					}
				]
			},
			"url": "https://errors.pydantic.dev/2.5/v/missing"
		},
		{
			"type": "missing",
			"loc": [
				"body",
				"access_token"
			],
			"msg": "Field required",
			"input": {
				"model": "gpt-3.5-turbo",
				"temperature": 0,
				"messages": [
					{
						"role": "system",
						"content": "You are a professional, authentic translation engine, only returns translations, without such a phrase as \"The text you provided appears to be in Chinese. Here is the English translation:\" or \"Here is the English translation of the text:\" or \"The text you provided appears to be a passage from a novel or a story. Here is the English translation:\""
					},
					{
						"role": "user",
						"content": "Translate the following text from Traditional Chinese to English:"
					},
					{
						"role": "user",
						"content": "第1293章 陽神可滅,落子無悔!(2/2)_執魔_蝌蚪小說"
					}
				]
			},
			"url": "https://errors.pydantic.dev/2.5/v/missing"
		}
	]
}


iG8R avatar Feb 23 '24 18:02 iG8R

I tested this for example:

from openai import OpenAI

client = OpenAI(
    api_key="asdf",
    base_url="http://localhost:1337/v1"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    extra_body={"provider": "You"},
    messages=[{"role": "user", "content": "write a poem about a tree"}],
)
print(response.choices[0].message.content)
(venv) c:\gpt4free>test.py
Traceback (most recent call last):
  File "C:\gpt4free\test.py", line 8, in <module>
    response = client.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\resources\chat\completions.py", line 663, in create
    return self._post(
           ^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\_base_client.py", line 889, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.UnprocessableEntityError: Error code: 422 - {'detail': [{'type': 'missing', 'loc': ['body', 'temperature'], 'msg': 'Field required', 'input': {'messages': [{'role': 'user', 'content': 'write a poem about a tree'}], 'model': 'gpt-3.5-turbo', 'provider': 'You'}, 'url': 'https://errors.pydantic.dev/2.5/v/missing'}, {'type': 'missing', 'loc': ['body', 'stop'], 'msg': 'Field required', 'input': {'messages': [{'role': 'user', 'content': 'write a poem about a tree'}], 'model': 'gpt-3.5-turbo', 'provider': 'You'}, 'url': 'https://errors.pydantic.dev/2.5/v/missing'}, {'type': 'missing', 'loc': ['body', 'access_token'], 'msg': 'Field required', 'input': {'messages': [{'role': 'user', 'content': 'write a poem about a tree'}], 'model': 'gpt-3.5-turbo', 'provider': 'You'}, 'url': 'https://errors.pydantic.dev/2.5/v/missing'}]}

(venv) c:\gpt4free>python
Python 3.12.0 (tags/v3.12.0:0fb18b0, Oct  2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from openai import OpenAI
>>>
>>> client = OpenAI(
...     api_key="asdf",
...     base_url="http://localhost:1337/v1"
... )
>>>
>>> response = client.chat.completions.create(
...     model="gpt-3.5-turbo",
...     extra_body={"provider": "You"},
...     messages=[{"role": "user", "content": "write a poem about a tree"}],
... )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\gpt4free\venv\Lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\resources\chat\completions.py", line 663, in create
    return self._post(
           ^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\_base_client.py", line 889, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "c:\gpt4free\venv\Lib\site-packages\openai\_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.UnprocessableEntityError: Error code: 422 - {'detail': [{'type': 'missing', 'loc': ['body', 'temperature'], 'msg': 'Field required', 'input': {'messages': [{'role': 'user', 'content': 'write a poem about a tree'}], 'model': 'gpt-3.5-turbo', 'provider': 'You'}, 'url': 'https://errors.pydantic.dev/2.5/v/missing'}, {'type': 'missing', 'loc': ['body', 'stop'], 'msg': 'Field required', 'input': {'messages': [{'role': 'user', 'content': 'write a poem about a tree'}], 'model': 'gpt-3.5-turbo', 'provider': 'You'}, 'url': 'https://errors.pydantic.dev/2.5/v/missing'}, {'type': 'missing', 'loc': ['body', 'access_token'], 'msg': 'Field required', 'input': {'messages': [{'role': 'user', 'content': 'write a poem about a tree'}], 'model': 'gpt-3.5-turbo', 'provider': 'You'}, 'url': 'https://errors.pydantic.dev/2.5/v/missing'}]}
>>> print(response.choices[0].message.content)

PS. I don't know why, but g4f didn't install the openai module, so at first I got the following error:

(venv) c:\gpt4free>test.py
Traceback (most recent call last):
  File "C:\gpt4free\test.py", line 1, in <module>
    from openai import OpenAI
ModuleNotFoundError: No module named 'openai'

iG8R avatar Feb 23 '24 18:02 iG8R

The same test works with version 0.2.1.6:

(venv) c:\gpt4free>test.py
In a realm where nature thrives,
Stands a being that forever thrives.
Its roots run deep, its branches reach high,
A majestic presence against the sky.

A tree, a symbol of strength and grace,
A witness to time's eternal embrace.
Its trunk, a pillar of steadfast might,
With bark that tells tales of ancient plight.

Oh, how it dances with the gentle breeze,
Leaves rustling in harmonious ease.
Each leaf, a brushstroke of vibrant hue,
Painting a picture of life anew.

From its boughs, birds sing their sweet refrain,
Nesting amidst its shelter, free from pain.
Their melodies, a chorus of pure delight,
Guiding lost souls through the darkest night.

Beneath its shade, weary travelers rest,
Finding solace in nature's loving nest.
The tree, a sanctuary of peace and calm,
Offering respite with open arms.

Seasons change, but the tree stands strong,
Welcoming the sun and enduring the long.
In spring, it blooms with blossoms fair,
In autumn, it sheds its leaves with care.

Through summer's heat and winter's chill,
The tree stands tall, unwavering will.
A testament to resilience and might,
A guardian of nature's sacred light.

Oh, tree, your beauty knows no end,
A cherished companion, faithful friend.
May your roots forever anchor deep,
As you watch over this world, secrets to keep.

(venv) c:\gpt4free>python
Python 3.12.0 (tags/v3.12.0:0fb18b0, Oct  2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from openai import OpenAI
>>>
>>> client = OpenAI(
...     api_key="asdf",
...     base_url="http://localhost:1337/v1"
... )
>>>
>>> response = client.chat.completions.create(
...     model="gpt-3.5-turbo",
...     extra_body={"provider": "You"},
...     messages=[{"role": "user", "content": "write a poem about a tree"}],
... )
>>> print(response.choices[0].message.content)
In a realm where whispers dance on the breeze,
Stands a noble creature, tall and serene.
Its branches reach out with grace and might,
A symphony of green against the light.

Roots dig deep, anchoring the earth,
A steadfast guardian of nature's worth.
Through seasons' ebb and flow, it remains,
A witness to life's joys and pains.

Beneath its shelter, creatures find solace,
From chirping birds to timid forest dwellers.
A sanctuary, where dreams take flight,
A refuge bathed in golden sunlight.

With every passing year, it grows,
A testament to resilience, nature's prose.
Its leaves, a tapestry of vibrant hues,
A kaleidoscope painting, nature's muse.

In spring, it blooms with blossoms fair,
A delicate crown upon its regal air.
In summer, a canopy of cool shade,
A respite from the sun's relentless raid.

Autumn arrives, and leaves cascade,
A fiery rain, nature's parade.
They fall like whispers from the sky,
As the tree prepares for winter's sigh.

Beneath its boughs, time stands still,
A sanctuary where hearts can heal.
For in this tree, a story unfolds,
Of life's cycles, of stories untold.

So let us marvel at this ancient being,
A testament to life's boundless meaning.
In its presence, we find solace and grace,
A tree's embrace, a sacred space.
(venv) c:\gpt4free>g4f api --debug DEBUG
INFO:     Started server process [19316]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
New g4f version: 0.2.1.8 (current: 0.0.0) | pip install -U g4f
Using You provider and gpt-3.5-turbo model
INFO:     127.0.0.1:52445 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-3.5-turbo model
Using GptGo provider
GptGo: KeyError: 'choices'
Using ChatBase provider
ChatBase: ClientResponseError: 400, message='Bad Request', url=URL('https://www.chatbase.co/api/fe/chat')
Using Chatgpt4Online provider
Chatgpt4Online: RuntimeError: No nonce found
Using GptForLove provider
GptForLove: ClientConnectorCertificateError: Cannot connect to host api.gptplus.one:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1000)')]
Using You provider
INFO:     127.0.0.1:52480 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-3.5-turbo model
Using ChatBase provider
ChatBase: ClientResponseError: 400, message='Bad Request', url=URL('https://www.chatbase.co/api/fe/chat')
Using GptGo provider
GptGo: KeyError: 'choices'
Using You provider
INFO:     127.0.0.1:52549 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-3.5-turbo model
Using ChatBase provider
ChatBase: ClientResponseError: 400, message='Bad Request', url=URL('https://www.chatbase.co/api/fe/chat')
Using GptGo provider
GptGo: KeyError: 'choices'
Using GptForLove provider
GptForLove: ClientConnectorCertificateError: Cannot connect to host api.gptplus.one:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1000)')]
Using Chatgpt4Online provider
Chatgpt4Online: RuntimeError: No nonce found
Using You provider
INFO:     127.0.0.1:52549 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-3.5-turbo model
Using ChatBase provider
ChatBase: ClientResponseError: 400, message='Bad Request', url=URL('https://www.chatbase.co/api/fe/chat')
Using GptGo provider
GptGo: KeyError: 'choices'
Using Chatgpt4Online provider
Chatgpt4Online: RuntimeError: No nonce found
Using GptForLove provider
GptForLove: ClientConnectorCertificateError: Cannot connect to host api.gptplus.one:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1000)')]
Using You provider
INFO:     127.0.0.1:52605 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-3.5-turbo model
Using ChatBase provider
ChatBase: ClientResponseError: 400, message='Bad Request', url=URL('https://www.chatbase.co/api/fe/chat')
Using GptGo provider
GptGo: KeyError: 'choices'
Using Chatgpt4Online provider
Chatgpt4Online: RuntimeError: No nonce found
Using You provider
INFO:     127.0.0.1:52659 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-3.5-turbo model
Using ChatBase provider
ChatBase: ClientResponseError: 400, message='Bad Request', url=URL('https://www.chatbase.co/api/fe/chat')
Using You provider
INFO:     127.0.0.1:52697 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using RetryProvider provider and gpt-3.5-turbo model
Using GptForLove provider
GptForLove: ClientConnectorCertificateError: Cannot connect to host api.gptplus.one:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1000)')]
Using GptGo provider
GptGo: KeyError: 'choices'
Using You provider
INFO:     127.0.0.1:52697 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using You provider and gpt-3.5-turbo model
INFO:     127.0.0.1:52720 - "POST /v1/chat/completions HTTP/1.1" 200 OK

iG8R avatar Feb 23 '24 18:02 iG8R

Yes, of course. The pydantic error message may be related to this change: https://github.com/xtekky/gpt4free/pull/1617/files#diff-916ba29538dcda9bda09dfaedee4f7bd72bdfde8ddb576320052b7ee46bf0340 That is the only place where I use pydantic's BaseModel. Can you test these changes?
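For context, a quick stdlib sketch of why those 422 "Field required" errors appear. Dataclasses behave like pydantic models on this point: a field declared without a default is mandatory, and the fix is to give optional fields defaults. The field names below mirror the error payload, not the actual g4f config class:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StrictConfig:
    # No defaults: every field is required, so a request that omits
    # "provider" is rejected, just like the 422 "Field required" above.
    model: str
    messages: list
    provider: str

@dataclass
class RelaxedConfig:
    # The fix: optional fields get defaults, so they may be omitted.
    model: str
    messages: list
    provider: Optional[str] = None
    temperature: Optional[float] = None
    stop: Optional[list] = field(default_factory=list)

try:
    StrictConfig(model="gpt-3.5-turbo", messages=[])
except TypeError as exc:
    print("rejected:", exc)

ok = RelaxedConfig(model="gpt-3.5-turbo", messages=[])
print("accepted, provider =", ok.provider)
```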

hlohaus avatar Feb 23 '24 18:02 hlohaus

With pleasure :)

iG8R avatar Feb 23 '24 18:02 iG8R

Hm... I manually added some additional providers and updated \g4f\models.py and \g4f\Provider\__init__.py accordingly, and everything worked just fine in versions 0.2.1.6 and 0.2.1.8, but after your above modifications I got:

(venv) c:\gpt4free>g4f gui -debug
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "c:\gpt4free\venv\Scripts\g4f.exe\__main__.py", line 4, in <module>
  File "c:\gpt4free\venv\Lib\site-packages\g4f\__init__.py", line 6, in <module>
    from .models   import Model, ModelUtils
  File "c:\gpt4free\venv\Lib\site-packages\g4f\models.py", line 3, in <module>
    from .Provider   import RetryProvider, ProviderType
  File "c:\gpt4free\venv\Lib\site-packages\g4f\Provider\__init__.py", line 38, in <module>
    from .GeminiProGBeCo  import GeminiProGBeCo
ModuleNotFoundError: No module named 'g4f.Provider.GeminiProGBeCo'

iG8R avatar Feb 23 '24 18:02 iG8R

Yes, of course. The pydantic error message may be related to this change: https://github.com/xtekky/gpt4free/pull/1617/files#diff-916ba29538dcda9bda09dfaedee4f7bd72bdfde8ddb576320052b7ee46bf0340 That is the only place where I use pydantic's BaseModel. Can you test these changes?

(venv) c:\gpt4free>g4f api --debug DEBUG
INFO:     Started server process [18208]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
Using You provider and gpt-3.5-turbo model
INFO:     127.0.0.1:60750 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Using OpenaiChat provider and gpt-3.5-turbo model
Error reading cookies from chrome for chat.openai.com: [Errno 13] Permission denied: 'C:\\Users\\iG8R\\AppData\\Roaming\\..\\Local\\Google\\Chrome\\User Data\\Default\\Network\\Cookies'
Read cookies from firefox for chat.openai.com
Open browser with config dir: C:\Users\iG8R\AppData\Local\g4f\g4f
Using OpenaiChat provider and gpt-3.5-turbo model
Using OpenaiChat provider and gpt-3.5-turbo model
Using OpenaiChat provider and gpt-3.5-turbo model
Using OpenaiChat provider and gpt-3.5-turbo model
Using OpenaiChat provider and gpt-3.5-turbo model
INFO:     127.0.0.1:59741 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK
INFO:     127.0.0.1:59739 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK
INFO:     127.0.0.1:59738 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK
INFO:     127.0.0.1:59745 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK
INFO:     127.0.0.1:59744 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK
INFO:     127.0.0.1:59735 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK

It works! It works! :) But now I need to somehow restore my previously manually added providers...

iG8R avatar Feb 23 '24 19:02 iG8R

@hlohaus Please, could you clarify why I get this error every time after updating to version 0.2.1.8, once I manually add a provider, e.g. GeminiProGBeCo? I made the following changes:

\g4f\models.py

6:	from .Provider   import (
7:	    Chatgpt4Online,
8:	    PerplexityLabs,
9:	    ChatgptDemoAi,
10:	    GeminiProChat,
11:	    GeminiProGBeCo,

\g4f\Provider\__init__.py

35:	from .FreeGpt         import FreeGpt
36:	from .GeekGpt         import GeekGpt
37:	from .GeminiProChat   import GeminiProChat
38:	from .GeminiProGBeCo  import GeminiProGBeCo

Added \g4f\Provider\GeminiProGBeCo.py

from __future__ import annotations

import time
from hashlib import sha256
from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider


class GeminiProGBeCo(AsyncGeneratorProvider):
    url = "https://g.be.co"
    working = True
    supports_gpt_35_turbo = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:122.0) Gecko/20100101 Firefox/122.0",
            "Accept": "*/*",
            "Accept-Language": "en-US,en;q=0.5",
            "Accept-Encoding": "gzip, deflate, br",
            "Content-Type": "text/plain;charset=UTF-8",
            "Referer": "https://g.be.co/",
            "Origin": "https://g.be.co",
            "Sec-Fetch-Dest": "empty",
            "Sec-Fetch-Mode": "cors",
            "Sec-Fetch-Site": "same-origin",
            "Connection": "keep-alive",
            "TE": "trailers",
        }
        async with ClientSession(headers=headers) as session:
            timestamp = int(time.time() * 1e3)
            data = {
                "messages":[{
                    "role": "model" if message["role"] == "assistant" else "user",
                    "parts": [{"text": message["content"]}]
                } for message in messages],
                "time": timestamp,
                "pass": None,
                "sign": generate_signature(timestamp, messages[-1]["content"]),
            }
            async with session.post(f"{cls.url}/api/generate", json=data, proxy=proxy) as response:
                response.raise_for_status()
                async for chunk in response.content.iter_any():
                    yield chunk.decode()
                        
def generate_signature(time: int, text: str, secret: str = ""):
    message = f"{time}:{text}:{secret}"
    return sha256(message.encode()).hexdigest()
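A quick standalone check of the signing scheme used above (the timestamp and text are made-up values; the helper is repeated here so the snippet runs on its own):

```python
import time
from hashlib import sha256

def generate_signature(timestamp: int, text: str, secret: str = "") -> str:
    # Same scheme as the provider's helper: sha256 over
    # "timestamp:text:secret", hex-encoded.
    message = f"{timestamp}:{text}:{secret}"
    return sha256(message.encode()).hexdigest()

ts = int(time.time() * 1e3)  # millisecond timestamp, as the provider uses
sig = generate_signature(ts, "hello")
print(len(sig))                                 # 64 hex characters
print(sig == generate_signature(ts, "hello"))   # deterministic: True
```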

iG8R avatar Feb 23 '24 19:02 iG8R

You don't need to change the g4f code. This import should also work:

from g4f.Provider.GeminiProGBeCo import GeminiProGBeCo

Alternatively, you can add it to the provider list in your code:

from g4f.Provider import ProviderUtils
ProviderUtils.convert["GeminiProGBeCo"] = GeminiProGBeCo
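The registration pattern, sketched with a stand-in registry (only `ProviderUtils.convert` is the real g4f name here; the class and dict below are placeholders so the snippet runs without g4f installed):

```python
class GeminiProGBeCo:
    # Stand-in for the custom provider class imported above.
    working = True

# Stand-in for g4f.Provider.ProviderUtils.convert, which maps the
# provider name received in a request to the provider class.
convert: dict = {}

# Register the custom provider under the name clients will send
# in the "provider" field of a request:
convert["GeminiProGBeCo"] = GeminiProGBeCo

# A request asking for "GeminiProGBeCo" can now be resolved:
provider = convert["GeminiProGBeCo"]
print(provider.working)  # True
```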

hlohaus avatar Feb 23 '24 19:02 hlohaus