gpt4free
First request to API hangs
I start the API with:
g4f api --bind localhost:1342 --ignored-providers Bing FreeChatgpt Liaobots
and get the following output:
Starting server... [g4f v-0.0.0]
INFO:     Started server process [12867]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:1342 (Press CTRL+C to quit)
New g4f version: 0.3.1.0 (current: 0.0.0) | pip install -U g4f
Everything is fine at this point.
Next, I make a first example request to this API, and it hangs indefinitely.
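For reference, a minimal client request against the local API could look like the sketch below. It assumes the OpenAI-compatible /v1/chat/completions endpoint and targets the port from the --bind option above; the helper names (build_payload, send_request) are mine, not part of g4f.

```python
import json
import urllib.request

def build_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

def send_request(base_url: str = "http://localhost:1342") -> str:
    """POST the payload to the local g4f API and return the reply text."""
    body = json.dumps(build_payload("gpt-3.5-turbo", "Say this is test")).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(send_request())
```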
Debug output:
Exception in callback Task.task_wakeup(<Future finished result=None>)
handle: <Handle Task.task_wakeup(<Future finished result=None>)>
Traceback (most recent call last):
  File "/usr/lib/python3.11/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
RuntimeError: Cannot enter into task <Task pending name='Task-1' coro=<Server.serve() running at /usr/lib/python3/dist-packages/uvicorn/server.py:80> wait_for=<Future finished result=None> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]> while another task <Task pending name='starlette.responses.StreamingResponse.__call__.<locals>.wrap' coro=<StreamingResponse.__call__.<locals>.wrap() running at /usr/lib/python3/dist-packages/starlette/responses.py:273> cb=[TaskGroup._spawn.<locals>.task_done() at /usr/lib/python3/dist-packages/anyio/_backends/_asyncio.py:661]> is being executed.

Exception in callback Task.task_wakeup(<Future finished result=True>)
handle: <Handle Task.task_wakeup(<Future finished result=True>)>
Traceback (most recent call last):
  File "/usr/lib/python3.11/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
RuntimeError: Cannot enter into task <Task pending name='Task-5' coro=<RequestResponseCycle.run_asgi() running at /usr/lib/python3/dist-packages/uvicorn/protocols/http/h11_impl.py:366> wait_for=<Future finished result=True> cb=[set.discard()]> while another task <Task pending name='starlette.responses.StreamingResponse.__call__.<locals>.wrap' coro=<StreamingResponse.__call__.<locals>.wrap() running at /usr/lib/python3/dist-packages/starlette/responses.py:273> cb=[TaskGroup._spawn.<locals>.task_done() at /usr/lib/python3/dist-packages/anyio/_backends/_asyncio.py:661]> is being executed.
Then I interrupt this request on the client side, without touching the server, and make the same request again.
This time I get an answer and everything works.
So, every first request hangs.
Environment
- Used latest g4f from github repo, satisfied all dependencies
Hey, which request are you trying to make? Just so you know, none of the GPT-4 models work if you don't include Bing.
A /v1 chat completion request with model gpt-3.5-turbo and the prompt "Say this is test". I'm sure it is a uvicorn-related error. I'm using a stable third-party chatbot client that works fine with previous versions of g4f.
An issue with uvicorn and uvloop has been resolved.
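For anyone hitting this before upgrading, one possible workaround is to force uvicorn onto the stock asyncio event loop instead of uvloop. This is only a sketch: it assumes the g4f FastAPI app is importable as g4f.api:app, which is a guess — check the module path in your installed version.

```shell
# Hypothetical workaround: bypass the g4f CLI and run the app directly
# under uvicorn with --loop asyncio so uvloop is never installed.
# "g4f.api:app" is an assumed import path; adjust it to your g4f version.
uvicorn g4f.api:app --host localhost --port 1342 --loop asyncio
```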