gpt4free
An 'Internal Server Error' is returned when calling the API on port 1337
The API endpoint is: http://127.0.0.1:1337/v1/chat/completions
The API request parameters are:

```
{ "model": "gpt-3.5-turbo-16k", "stream": "False", "messages": [ {"role": "assistant", "content": "Hello"} ] }
```

The result returned by calling the API is: 'Internal Server Error'.
Use `stream: false`, not `stream: "False"`. Also, your JSON is not valid or has too many spaces or line breaks. Try:

```
{"model": "gpt-3.5-turbo-16k", "stream": false, "messages": [{"role": "assistant", "content": "Hello"}]}
```
Whether it's `stream: "False"` or `stream: "false"`, with or without the extra spaces (the JSON itself is not affected by spaces), I have tried all of these and none of them worked. Has anyone else encountered this issue before?
Likely, the model you want to use is not functional
Same issue! Any update on this?
What do you see in the terminal / the error logs? Try using gpt-4 or another provider/model. Leave the api_key blank.
@hlohaus Hey, I tried gpt-4 but got the same error.
I found some backend errors, as shown below.
Do you use workers? Can you uninstall uvloop?
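In case it helps to verify the environment first, here is a small sketch (it assumes the check runs in the same environment/container as the g4f API; the pip command is the standard way to remove the package):

```
# Check whether uvloop is present in the environment running the g4f API.
# The "Can't patch loop of type <class 'uvloop.Loop'>" errors reported later
# in this thread come from uvloop's event loop, so removing the package lets
# the API fall back to asyncio's default loop.
try:
    import uvloop  # noqa: F401
    print("uvloop is installed; remove it with: pip uninstall -y uvloop")
except ImportError:
    print("uvloop is not installed")
```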
No luck!
Let me explain how I'm using it. I'm running g4f in Docker; below is the Dockerfile:

```
# Use the official Python image
FROM python:3.9

# Install g4f with all dependencies
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install "g4f[all]"
RUN pip uninstall -y uvloop

# Command to run the g4f API
#CMD ["python", "-m", "g4f.api.run"]

# The g4f API listens on port 1337 inside the container
EXPOSE 1337

# Run the g4f API via an ENTRYPOINT (for development only)
ENTRYPOINT ["/bin/sh", "-c", "python -m g4f.api.run"]
```
Build and run with:

```
docker build -t g4f-api .
docker run -p 80:1337 -d g4f-api
```
The same issue is reported here: https://github.com/xtekky/gpt4free/issues/1928
Same issue here; maybe the async event loop version?
Hey @MockArch, why aren't you using our image?
I have the same problem
The same here.
Here is the requirements file exported from the Docker image I used recently. This should help you. I have already rebuilt my own image.
I have the same problem. After a while I get an Internal Server Error (500) back from the API. Ubuntu 24.04, running via Docker.
At first some requests work, but after a while it stops.
Same here: I just grabbed a new image, and it all broke. I suspect async. What is the official solution?
Here is my error text:

```
2024-05-14 13:21:52.854 INFO: HTTP Request: POST http://localhost:1337/v1/chat/completions "HTTP/1.1 200 OK"
2024-05-14 13:21:52.860 ERROR: Error while streaming response
Traceback (most recent call last):
  File "C:\Users\JohnWick3\IdeaProjects\discord-llm-chatbot\llmcord.py", line 322, in on_message
    async for chunk in await acompletion(**kwargs):
  File "C:\Users\JohnWick3.conda\envs\DiscoBot\Lib\site-packages\litellm\utils.py", line 9973, in anext
    raise e
  File "C:\Users\JohnWick3.conda\envs\DiscoBot\Lib\site-packages\litellm\utils.py", line 9857, in anext
    async for chunk in self.completion_stream:
  File "C:\Users\JohnWick3\AppData\Roaming\Python\Python311\site-packages\openai_streaming.py", line 150, in aiter
    async for item in self._iterator:
  File "C:\Users\JohnWick3\AppData\Roaming\Python\Python311\site-packages\openai_streaming.py", line 181, in stream
    raise APIError(
openai.APIError: RetryProviderError: RetryProvider failed:
OpenaiChat: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
ChatgptNext: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
Feedough: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
You: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
Aichatos: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
Koala: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
FreeGpt: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
Cnote: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
```
Any help is definitely appreciated!
Can you uninstall uvloop or use a provider directly?
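For the "use a provider directly" part, a minimal sketch with the g4f Python package is below. The provider FreeGpt is only picked because it appears in the error log above and may or may not be working; the exact provider list depends on the installed g4f version.

```
import g4f
from g4f.Provider import FreeGpt

# Bypass RetryProvider by calling one provider directly, so any failure
# is attributable to that single provider rather than the whole retry chain.
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=FreeGpt,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```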
I can use the g4f web chat on port 8080 with no problem. Also, an old image (quite old, 2.7.1) works fine; that was the only other Docker build I had handy. I think things were fine up until the last version or two. I'm pretty sure I need async, but I'm not well versed enough yet to know the internals of uvloop, etc.
Other than this, I've been using this project without any problems. Zero other issues.
UPDATE: I found another image, 3.0.7, and that one also still works.
FURTHER UPDATE: I am using Docker Desktop on a Windows machine. Here is a copy of the docker run command for the WORKING g4f version (3.0.7), if useful:
```
docker run --hostname=e466ba3ab3fa --user=1000 --env=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/g4f/.local/bin --env=DEBIAN_FRONTEND=noninteractive --env=DEBCONF_NONINTERACTIVE_SEEN=true --env=SEL_USER=g4f --env=SEL_UID=1000 --env=SEL_GID=1000 --env=HOME=/home/g4f --env=TZ=UTC --env=SEL_DOWNLOAD_DIR=/Downloads --env=SE_BIND_HOST=false --env=SE_REJECT_UNSUPPORTED_CAPS=false --env=SE_OTEL_JAVA_GLOBAL_AUTOCONFIGURE_ENABLED=true --env=SE_OTEL_TRACES_EXPORTER=otlp --env=LANG_WHICH=en --env=LANG_WHERE=US --env=ENCODING=UTF-8 --env=LANGUAGE=en_US.UTF-8 --env=LANG= --env=SE_ENABLE_BROWSER_LEFTOVERS_CLEANUP=false --env=SE_BROWSER_LEFTOVERS_INTERVAL_SECS=3600 --env=SE_BROWSER_LEFTOVERS_PROCESSES_SECS=7200 --env=SE_BROWSER_LEFTOVERS_TEMPFILES_DAYS=1 --env=SE_DRAIN_AFTER_SESSION_COUNT=0 --env=SE_NODE_MAX_SESSIONS=1 --env=SE_NODE_SESSION_TIMEOUT=300 --env=SE_NODE_OVERRIDE_MAX_SESSIONS=false --env=SE_NODE_HEARTBEAT_PERIOD=30 --env=SE_OTEL_SERVICE_NAME=selenium-node --env=SE_OFFLINE=true --env=SE_SCREEN_WIDTH=1850 --env=SE_SCREEN_HEIGHT=1020 --env=SE_SCREEN_DEPTH=24 --env=SE_SCREEN_DPI=96 --env=SE_START_XVFB=true --env=SE_START_VNC=true --env=SE_START_NO_VNC=true --env=SE_NO_VNC_PORT=7900 --env=SE_VNC_PORT=5900 --env=DISPLAY=:99.0 --env=DISPLAY_NUM=99 --env=CONFIG_FILE=/opt/selenium/config.toml --env=GENERATE_CONFIG=true --env=DBUS_SESSION_BUS_ADDRESS=/dev/null --env=G4F_VERSION= --env=G4F_USER=g4f --env=G4F_USER_ID=1000 --env=G4F_NO_GUI= --env=PYTHONUNBUFFERED=1 --env=G4F_DIR=/app --env=G4F_LOGIN_URL=http://localhost:7900/?autoconnect=1&resize=scale&password=secret --env=SE_DOWNLOAD_DIR=/home/g4f/Downloads --volume=/mnt/c/gpt4free2:/app:rw --network=gpt4free2_default --workdir=/app -p 1337:1337 -p 7900:7900 -p 8080:8080 --restart=no --label='authors=' --label='com.docker.compose.config-hash=3f07d5e95a66f9d2c685d3a54b8dee7f91e653f1960fa8792830c137000a5f94' --label='com.docker.compose.container-number=1' --label='com.docker.compose.depends_on=' --label='com.docker.compose.image=sha256:57e2a18f46603015825882402d1a17929ebb5d1e13464991ede771ab4fce4211' --label='com.docker.compose.oneoff=False' --label='com.docker.compose.project=gpt4free2' --label='com.docker.compose.project.config_files=/mnt/c/gpt4free2/docker-compose.yml' --label='com.docker.compose.project.working_dir=/mnt/c/gpt4free2' --label='com.docker.compose.replace=963c48a32e503da8fb0aea7bdff6fcd0c31cd7bc4e673a1f14a91f603140ec0c' --label='com.docker.compose.service=gpt4free' --label='com.docker.compose.version=2.26.1' --label='desktop.docker.io/wsl-distro=Ubuntu' --runtime=runc -d hlohaus789/g4f:latest
```
I fixed the uvloop issue.
