LLM Web API
Chinese Documentation (中文文档)
Converts the ChatGPT web page into an API interface.
Features
- Bypass the Cloudflare challenge
- Solve FunCaptcha via Capsolver
- Use without login (supports the gpt-3.5 model)
- Auto-login with an email account (supports the gpt-3.5, gpt-4o, and gpt-4 models)
- Auto re-login after session expiration
- High-speed streaming output
- Model switching
- Multi-turn conversations
- Dynamically display supported models
- Support for both event-stream and WebSocket responses
- Fully compatible with the ChatGPT API
Usage
Only Docker images are supported; the code in this repository is not up to date.
Docker
Run with Docker
docker run --name llm-web-api --rm -it -p 5000:5000 adryfish/llm-web-api
Docker compose
See the Environment section below for details on the available environment variables.
version: '3.8'
services:
  llm-web-api:
    image: adryfish/llm-web-api
    container_name: llm-web-api
    ports:
      - "5000:5000"
    volumes:
      # Browser data. Mount this if you want to retain browser login state.
      - ./browser_data:/app/browser_data
    environment:
      # PROXY_SERVER: ""          # Proxy server address
      # USER_AGENT: ""            # Browser User-Agent
      # OPENAI_LOGIN_TYPE: ""     # Login type: nologin or email
      # OPENAI_LOGIN_EMAIL: ""    # Login email
      # OPENAI_LOGIN_PASSWORD: "" # Login password
    restart: unless-stopped
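Save this as docker-compose.yml and start the service in the background:

docker compose up -d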
Environment
All environment variables are optional. CAPSOLVER_API_KEY only needs to be set if you actually encounter a FunCaptcha challenge.
| Variable | Description | Default |
|---|---|---|
| PROXY_SERVER | Proxy server address | None |
| USER_AGENT | Browser User-Agent | Browser default |
| BROWSER_DATA | Browser data storage directory | ./browser_data |
| OPENAI_LOGIN_TYPE | ChatGPT login type: nologin or email | nologin |
| OPENAI_LOGIN_EMAIL | Email account for the email login type | None |
| OPENAI_LOGIN_PASSWORD | Password for the email login type | None |
| FUNCAPTCHA_PROVIDER | FunCaptcha solver provider | capsolver |
| CAPSOLVER_API_KEY | API key for Capsolver | None |
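For example, a plain docker run equivalent of an email-login setup might look like this (a sketch; the email and password values are placeholders to replace with your own):

docker run --name llm-web-api --rm -it \
  -p 5000:5000 \
  -v "$(pwd)/browser_data:/app/browser_data" \
  -e OPENAI_LOGIN_TYPE=email \
  -e OPENAI_LOGIN_EMAIL=you@example.com \
  -e OPENAI_LOGIN_PASSWORD=your-password \
  adryfish/llm-web-api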
API
Currently supports the OpenAI-compatible /v1/chat/completions API, which can be accessed with the official OpenAI clients or any other compatible client.
Chat completion
Chat completion API, compatible with the OpenAI chat completions API.
POST /v1/chat/completions
Request:
{
// If you are a no-login user, use gpt-3.5-turbo or gpt-4o-mini
// If you are a free (logged-in) user, use gpt-3.5-turbo, gpt-4o-mini, or gpt-4o
// If you are a subscribed user, use gpt-3.5-turbo, gpt-4o-mini, gpt-4o, or gpt-4
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": "Hello"
}
],
// Set to true for SSE streaming output; default is false
"stream": false
}
Response:
{
"id": "chatcmpl-ZklDQbSRpTI5gzb8zzctb6fB3YDW",
"model": "gpt-4o",
"object": "chat.completion",
"choices": [
{
"message": {
"role": "assistant",
"content": "Hi there! How can I assist you today?"
},
"index": 0,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 1,
"completion_tokens": 1,
"total_tokens": 2
},
"created": 1716305953
}
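For a quick smoke test, you can send the request above with curl (assuming the server is running locally on the default port 5000):

curl http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}], "stream": false}'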
Examples
Using the Official OpenAI Library
Python
import openai
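# Point the client at the local llm-web-api server; any API key is accepted.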
openai.api_key = 'anything'
openai.base_url = "http://localhost:5000/v1/"
completion = openai.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "user", "content": "Hello"},
],
)
print(completion.choices[0].message.content)
Node.js
import OpenAI from 'openai';
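// Point the client at the local llm-web-api server; any API key is accepted.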
const openai = new OpenAI({
apiKey: "anything",
baseURL: "http://localhost:5000/v1/",
});
const chatCompletion = await openai.chat.completions.create({
messages: [{ role: 'user', content: 'Echo Hello' }],
model: 'gpt-4o-mini',
});
console.log(chatCompletion.choices[0].message.content);
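To consume the SSE stream without a client library, set "stream": true and disable curl's output buffering with -N (a minimal sketch using the same local endpoint):

curl -N http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}], "stream": true}'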
Notes
Nginx config
If you are using Nginx as a reverse proxy in front of llm-web-api, add the following configuration to optimize streaming output and improve the user experience.
# Disable proxy buffering. When set to off, Nginx sends client requests to the backend server immediately and sends responses back to the client immediately.
proxy_buffering off;
# Enable chunked transfer encoding. This allows the server to send data in chunks for dynamically generated content without knowing the size of the content in advance.
chunked_transfer_encoding on;
# Enable TCP_NOPUSH, which tells Nginx to batch data into full packets before sending them to the client. Often used together with sendfile to improve network efficiency.
tcp_nopush on;
# Enable TCP_NODELAY, which tells Nginx not to delay sending data and to send small data packets immediately. In some cases, this can reduce network latency.
tcp_nodelay on;
# Set the keepalive timeout, here set to 120 seconds. If there is no further communication between the client and the server within this period, the connection will be closed.
keepalive_timeout 120;
Token
Since inference is not performed on the llm-web-api side, the token counts in the usage field are fixed placeholder values, not real statistics.
Disclaimer
This project is for learning and research purposes only and is not intended for commercial use. You should be aware that using this project may violate related user agreements and understand the associated risks. We are not responsible for any losses resulting from the use of this project.
Reference
- MediaCrawler: https://github.com/NanmiCoder/MediaCrawler
- Bypass Cloudflare: https://github.com/sarperavci/CloudflareBypassForScraping
- ChatGPT Reverse Engine: https://github.com/PawanOsman/ChatGPT