gpt4free
[GeminiPro] Requesting via the API results in "404 Not Found"
Bug description
When I send requests to GeminiPro via the g4f API, the following errors occur, although requests to OpenaiChat go through fine. Notably, GeminiPro works fine through the g4f WebUI.
I tried the following request paths:
http://127.0.0.1:1337/v1beta/models/gemini-pro
http://127.0.0.1:1337/v1beta/models/gemini-pro?provider=GeminiPro
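For context, the paths above follow Google's native Generative Language API layout, which the g4f server does not route (the maintainer later notes that g4f serves an OpenAI-style API). A hypothetical side-by-side sketch, where `HOST`, `native_path`, and `openai_path` are illustrative names:

```python
# Illustrative comparison of the two URL layouts in this thread.
HOST = "http://127.0.0.1:1337"

# Google-native layout, as built by the client app — g4f answers 404 for it:
native_path = f"{HOST}/v1beta/models/gemini-pro:streamGenerateContent"

# OpenAI-style layout, which is what the g4f server actually routes:
openai_path = f"{HOST}/v1/chat/completions?provider=GeminiPro"
```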
(venv) c:\gpt4free>g4f api --debug DEBUG
INFO: Started server process [148]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO: 127.0.0.1:55721 - "POST /v1beta/models/gemini-pro%3AstreamGenerateContent?key=xxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:50810 - "POST /v1beta/models/gemini-pro?provider=GeminiPro:streamGenerateContent?key=xxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:52157 - "POST /v1beta/models/gemini-pro%3AstreamGenerateContent?key=xxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 404 Not Found
Using OpenaiChat provider and gpt-3.5-turbo model
Read cookies from chrome for chat.openai.com
Read cookies from edge for chat.openai.com
Read cookies from firefox for chat.openai.com
Open browser with config dir: C:\Users\iG8R\AppData\Local\g4f\g4f
INFO: 127.0.0.1:52180 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK
Using OpenaiChat provider and gpt-3.5-turbo model
INFO: 127.0.0.1:52225 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK
Using OpenaiChat provider and gpt-3.5-turbo model
INFO: 127.0.0.1:52299 - "POST /v1/chat/completions?provider=OpenaiChat HTTP/1.1" 200 OK
You can use this url:
http://127.0.0.1:1337/v1/chat/completions?provider=GeminiPro
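A minimal sketch of a request to that URL, assuming a standard OpenAI chat-completions body; the `send_chat` helper and the exact payload fields are illustrative assumptions, not g4f's documented client. A 422 here typically means the JSON body is missing or fails validation:

```python
import json
from urllib import request

# URL from the maintainer's suggestion above.
G4F_URL = "http://127.0.0.1:1337/v1/chat/completions?provider=GeminiPro"

# OpenAI-style chat-completions body (assumed shape).
payload = {
    "model": "gemini-pro",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}

def send_chat(url: str = G4F_URL) -> dict:
    """POST the JSON payload; a missing or malformed body is what yields 422."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```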
Unfortunately, I get the same result with the following request paths:
http://127.0.0.1:1337/v1/chat/completions?provider=GeminiPro - 422 Unprocessable Entity
http://127.0.0.1:1337/v1beta/chat/completions?provider=GeminiPro - 404 Not Found
http://127.0.0.1:1337/v1beta/models/gemini-pro?provider=GeminiPro - 404 Not Found
http://127.0.0.1:1337/v1/models/gemini-pro?provider=GeminiPro - 405 Method Not Allowed
INFO: Started server process [13436]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:1337 (Press CTRL+C to quit)
INFO: 127.0.0.1:62786 - "POST /v1/chat/completions?provider=GeminiPro:streamGenerateContent?key=xxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 422 Unprocessable Entity
INFO: 127.0.0.1:62790 - "POST /v1beta/chat/completions?provider=GeminiPro:streamGenerateContent?key=xxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:62794 - "POST /v1beta/models/gemini-pro?provider=GeminiPro:streamGenerateContent?key=xxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:62800 - "POST /v1/models/gemini-pro?provider=GeminiPro:streamGenerateContent?key=xxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 405 Method Not Allowed
FYI, I use this app - https://github.com/pot-app/pot-desktop
Why do you want to use the g4f URL? The g4f API follows the OpenAI spec. You can use the official URL.
In my country, GeminiPro is geo-blocked by Google, so I need a VPN, which works quite well with my home internet provider. But when I'm away from home on mobile internet or a WiFi hotspot, it often happens that this VPN, being hosted outside my country, gets blocked. So I'm forced to connect to my home server and use it as a proxy. For reasons unknown to me, the apps I use with AI providers perform quite unstably when I set their global settings to route through my home server as an internet proxy. But with a direct connection through g4f, OpenAI, for example, works perfectly. So I decided to route GeminiPro through g4f as well.
You can set my URL in the OpenAI fields. If your chat service uses it the same way as OpenAI, it should work. I rename the roles and a few other things in the provider.
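The "set my URL in the OpenAI fields" idea can be sketched with the official `openai` Python client, which accepts a custom `base_url`; the dummy API key and the `make_client` helper are assumptions for illustration, not part of g4f:

```python
# Point an OpenAI-compatible client at the local g4f server (sketch).
BASE_URL = "http://127.0.0.1:1337/v1"

def make_client(api_key: str = "not-needed"):
    """Build an OpenAI client against the g4f server, or None if the
    third-party `openai` package is not installed."""
    try:
        from openai import OpenAI  # third-party: pip install openai
    except ImportError:
        return None
    # g4f serves an OpenAI-spec API, so the standard client shape applies;
    # the key is a placeholder unless the chosen provider requires one.
    return OpenAI(base_url=BASE_URL, api_key=api_key)
```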
Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.
Can you remove the streamGenerateContent from the request url?
I did it, but with the same result.
Sorry, I was replying to the thread. You have to add the URL in the OpenAI fields, because we serve an OpenAI API.
Yeah, I understand. But adding the URL in the OpenAI fields is a bit problematic, as the OpenAI slot is already taken and additional instances are not allowed.
Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.