Use Claude Code for free in the terminal, the VS Code extension, or a Discord bot like OpenClaw (voice supported)

🤖 Free Claude Code

Use Claude Code CLI, VS Code, JetBrains ACP, or chat bots through your own Anthropic-compatible proxy.

License: MIT · Python 3.14 · uv · Tested with Pytest · Type checking: Ty · Code style: Ruff · Logging: Loguru

Free Claude Code routes Anthropic Messages API traffic from Claude Code to NVIDIA NIM, Kimi, Wafer, OpenRouter, DeepSeek, LM Studio, llama.cpp, Ollama, OpenCode Zen, or Z.ai. It keeps Claude Code's client-side protocol stable while letting you choose free, paid, or local models.

Quick Start · Providers · Clients · Configuration · Development

Free Claude Code in action

Star History

What You Get

  • Drop-in proxy for Claude Code's Anthropic API calls.
  • Ten provider backends: NVIDIA NIM, Kimi, Wafer, OpenRouter, DeepSeek, LM Studio, llama.cpp, Ollama, OpenCode Zen, and Z.ai.
  • Per-model routing: send Opus, Sonnet, Haiku, and fallback traffic to different providers.
  • Native Claude Code /model picker support through the proxy's /v1/models endpoint (Claude Code must opt in to Gateway model discovery; see Model Picker).
  • Streaming, tool use, reasoning/thinking block handling, and local request optimizations.
  • Optional Discord or Telegram bot wrapper for remote coding sessions.
  • Optional use through the VS Code extension.
  • Optional voice-note transcription through local Whisper or NVIDIA NIM.
  • Local Admin UI at /admin to edit supported proxy settings, validate changes, and check providers (loopback access only).

Quick Start

1. Install the latest version of Claude Code

npm install -g @anthropic-ai/claude-code

2. Install Runtime Requirements

Install the latest version of uv and Python 3.14.

macOS/Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh
uv self update
uv python install 3.14

Windows PowerShell:

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
uv self update
uv python install 3.14

3. Get An NVIDIA NIM API Key

Create a free NVIDIA NIM API key, then keep it ready for the Admin UI setup step.

See NVIDIA NIM provider setup.

4. Install The Proxy

uv tool install --force git+https://github.com/Alishahryar1/free-claude-code.git

Use the same command to update to the latest version.

5. Start The Proxy

fcc-server

After startup, the terminal prints the proxy and admin URLs:

Server URL: http://127.0.0.1:8082
Admin UI: http://127.0.0.1:8082/admin

Many terminals make these URLs clickable. If your configured PORT is not 8082, substitute it in both URLs.

6. Open The Admin UI And Configure NVIDIA NIM

Open the Admin UI URL from the terminal output.

Local admin UI for proxy settings

Paste your NVIDIA NIM API key into NVIDIA_NIM_API_KEY, then click Validate and Apply.

The default model is already set to nvidia_nim/z-ai/glm4.7. You can change it later from the same Admin UI.

7. Run Claude Code

fcc-claude

fcc-claude reads the current configured port and auth token each time it starts, sets the Claude Code environment variables, and then launches the real claude command.

Choose A Provider

Pick one provider, enter its key or local URL in the Admin UI, and set MODEL to a provider-prefixed model slug. MODEL is the fallback. MODEL_OPUS, MODEL_SONNET, and MODEL_HAIKU can override routing for Claude Code's model tiers.

1. NVIDIA NIM

Get a key at build.nvidia.com/settings/api-keys.

In the Admin UI, paste it into NVIDIA_NIM_API_KEY. The default MODEL is nvidia_nim/z-ai/glm4.7.

Popular examples:

  • nvidia_nim/z-ai/glm4.7
  • nvidia_nim/z-ai/glm5
  • nvidia_nim/moonshotai/kimi-k2.5
  • nvidia_nim/minimaxai/minimax-m2.5

Browse models at build.nvidia.com.

2. Kimi

Get a key at platform.moonshot.ai/console/api-keys.

In the Admin UI, paste it into KIMI_API_KEY, then set MODEL to a Kimi slug such as kimi/kimi-k2.5.

Browse models at platform.moonshot.ai.

3. Wafer

Get a key from wafer.ai. In the Admin UI, paste it into WAFER_API_KEY, then set MODEL to a Wafer Pass model such as wafer/DeepSeek-V4-Pro.

Popular examples:

  • wafer/DeepSeek-V4-Pro
  • wafer/MiniMax-M2.7
  • wafer/Qwen3.5-397B-A17B
  • wafer/GLM-5.1

This provider uses Wafer's Anthropic-compatible endpoint at https://pass.wafer.ai/v1/messages.

4. OpenRouter

Get a key at openrouter.ai/keys.

In the Admin UI, paste it into OPENROUTER_API_KEY, then set MODEL to an OpenRouter slug such as open_router/stepfun/step-3.5-flash:free.

Browse all models or free models.

5. DeepSeek

Get a key at platform.deepseek.com/api_keys.

In the Admin UI, paste it into DEEPSEEK_API_KEY, then set MODEL to a DeepSeek slug such as deepseek/deepseek-chat.

This provider uses DeepSeek's Anthropic-compatible endpoint, not the OpenAI chat-completions endpoint.

6. LM Studio

Start LM Studio's local server and load a model. In the Admin UI, keep or update LM_STUDIO_BASE_URL, then set MODEL to the model identifier shown by LM Studio, prefixed with lmstudio/.

Prefer models with tool-use support for Claude Code workflows.

7. llama.cpp

Start llama-server with an Anthropic-compatible /v1/messages endpoint and enough context for Claude Code requests.

In the Admin UI, keep or update LLAMACPP_BASE_URL, then set MODEL to the local model slug, prefixed with llamacpp/.

For local coding models, context size matters. If llama.cpp returns HTTP 400 for normal Claude Code requests, increase --ctx-size and verify the model/server build supports the requested features.
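
As a reference point, a minimal llama-server launch with an enlarged context window might look like this; the model path and context size are placeholders to adapt to your model and hardware:

llama-server -m ./models/your-model.gguf --port 8080 --ctx-size 32768

Port 8080 matches the default LLAMACPP_BASE_URL of http://localhost:8080/v1.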

8. Ollama

Run Ollama and pull a model:

ollama pull llama3.1
ollama serve

In the Admin UI, keep or update OLLAMA_BASE_URL, then set MODEL to the same tag shown by ollama list, prefixed with ollama/.

OLLAMA_BASE_URL is the Ollama server root; do not append /v1. Example model slugs include ollama/llama3.1 and ollama/llama3.1:8b.
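
To confirm the proxy will be able to reach the server, you can query Ollama's standard model-listing endpoint at the same root:

curl http://localhost:11434/api/tags

It returns the same tags as ollama list; prefix any of them with ollama/ for MODEL.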

9. OpenCode Zen

Get an API key at opencode.ai/auth.

In the Admin UI, paste it into OPENCODE_API_KEY, then set MODEL to an OpenCode Zen model slug such as opencode/gpt-5.3-codex.

OpenCode Zen is a curated model gateway that provides access to models from Anthropic, OpenAI, Google, DeepSeek, and more through a single API key and OpenAI-compatible endpoint at https://opencode.ai/zen/v1.

Popular examples:

  • opencode/gpt-5.3-codex
  • opencode/claude-sonnet-4
  • opencode/deepseek-v4-flash-free (free)
  • opencode/gemini-3-flash
  • opencode/big-pickle (free)
  • opencode/glm-5.1

Browse available models at opencode.ai.

10. Z.ai

Get an API key at Z.ai/manage-apikey/apikey-list.

In the Admin UI, paste it into ZAI_API_KEY, then set MODEL to a Z.ai model slug such as zai/glm-5.1.

Z.ai provides GLM models through the OpenAI-compatible Coding Plan endpoint at https://api.z.ai/api/coding/paas/v4.

Popular examples:

  • zai/glm-5.1
  • zai/glm-5-turbo

Browse models at Z.ai.

11. Mix Providers By Model Tier

Each model tier can use a different provider by setting MODEL_OPUS, MODEL_SONNET, and MODEL_HAIKU in the Admin UI. Leave a tier blank to inherit MODEL.

For example, you can route Opus to nvidia_nim/moonshotai/kimi-k2.5, Sonnet to open_router/deepseek/deepseek-r1-0528:free, Haiku to lmstudio/unsloth/GLM-4.7-Flash-GGUF, and keep the fallback MODEL on zai/glm-5.1.
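
Expressed as Admin UI or .env values, that example routing is:

MODEL="zai/glm-5.1"
MODEL_OPUS="nvidia_nim/moonshotai/kimi-k2.5"
MODEL_SONNET="open_router/deepseek/deepseek-r1-0528:free"
MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF"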

Connect Claude Code

1. Claude Code CLI

For terminal use, prefer the installed launcher:

fcc-claude

Keep fcc-server running while you work. The Admin UI manages the proxy config and restarts the server when runtime settings change; fcc-claude reads the current Admin UI-managed port and auth token every time it starts.
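
If you prefer to launch claude directly, a rough manual equivalent is to export the same variables yourself. This sketch assumes the default port and the freecc token used elsewhere in this README:

ANTHROPIC_BASE_URL=http://127.0.0.1:8082 \
ANTHROPIC_AUTH_TOKEN=freecc \
CLAUDE_CODE_ENABLE_GATEWAY_MODEL_DISCOVERY=1 \
claude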

2. VS Code Extension

Open Settings, search for claudeCode.environmentVariables, choose Edit in settings.json, and add:

"claudeCode.environmentVariables": [
  { "name": "ANTHROPIC_BASE_URL", "value": "http://localhost:8082" },
  { "name": "ANTHROPIC_AUTH_TOKEN", "value": "freecc" },
  { "name": "CLAUDE_CODE_ENABLE_GATEWAY_MODEL_DISCOVERY", "value": "1" }
]

Reload the extension. If the extension shows a login screen, choose the Anthropic Console path once; the local proxy still handles model traffic after the environment variables are active.

3. JetBrains ACP

Edit the installed Claude ACP config:

  • Windows: C:\Users\%USERNAME%\AppData\Roaming\JetBrains\acp-agents\installed.json
  • Linux/macOS: ~/.jetbrains/acp.json

Set the environment for acp.registry.claude-acp:

"env": {
  "ANTHROPIC_BASE_URL": "http://localhost:8082",
  "ANTHROPIC_AUTH_TOKEN": "freecc",
  "CLAUDE_CODE_ENABLE_GATEWAY_MODEL_DISCOVERY": "1"
}

Restart the IDE after changing the file.

4. Model Picker

With CLAUDE_CODE_ENABLE_GATEWAY_MODEL_DISCOVERY set to 1, Claude Code's /model picker lists the models the proxy serves from its /v1/models endpoint.

Claude Code model picker showing gateway models

Optional Integrations

1. Discord And Telegram Bots

The bot wrapper runs Claude Code sessions remotely, streams progress, supports reply-based conversation branches, and can stop or clear tasks.

Discord minimum config:

MESSAGING_PLATFORM="discord"
DISCORD_BOT_TOKEN="your-discord-bot-token"
ALLOWED_DISCORD_CHANNELS="123456789"
CLAUDE_WORKSPACE="./agent_workspace"
ALLOWED_DIR="C:/Users/yourname/projects"

Create the bot in the Discord Developer Portal, enable Message Content Intent, and invite it with read/send/history permissions.

Telegram minimum config:

MESSAGING_PLATFORM="telegram"
TELEGRAM_BOT_TOKEN="123456789:ABC..."
ALLOWED_TELEGRAM_USER_ID="your-user-id"
CLAUDE_WORKSPACE="./agent_workspace"
ALLOWED_DIR="C:/Users/yourname/projects"

Get a token from @BotFather and your user ID from @userinfobot.

Useful commands:

  • /stop cancels a task; reply to a task message to stop only that branch.
  • /clear resets sessions; reply to clear one branch.
  • /stats shows session state.

2. Voice Notes

Voice notes work on Discord and Telegram. Choose one backend:

uv sync --extra voice_local                # local Whisper transcription
uv sync --extra voice                      # NVIDIA NIM hosted transcription
uv sync --extra voice --extra voice_local  # install both backends
VOICE_NOTE_ENABLED=true
WHISPER_DEVICE="cpu"          # cpu | cuda | nvidia_nim
WHISPER_MODEL="base"
HF_TOKEN=""

Use WHISPER_DEVICE="nvidia_nim" with the voice extra and NVIDIA_NIM_API_KEY for NVIDIA-hosted transcription.

Configuration Reference

.env.example is the canonical list of variables. The sections below are the ones most users change.

1. Manual .env Setup (Headless)

Use this only if you prefer file-based config or are running headless. The Admin UI is easier for first setup.

cp .env.example .env

Example for NVIDIA NIM:

NVIDIA_NIM_API_KEY="nvapi-your-key"
MODEL="nvidia_nim/z-ai/glm4.7"
ANTHROPIC_AUTH_TOKEN="freecc"

Config precedence is repo .env, then ~/.config/free-claude-code/.env, then FCC_ENV_FILE when set. ANTHROPIC_AUTH_TOKEN can be any local secret; pass the same value to Claude Code.
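
For example, assuming FCC_ENV_FILE is read from the process environment, you can point a headless server at a custom config file:

FCC_ENV_FILE=/path/to/custom.env fcc-server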

2. Model Routing

MODEL="nvidia_nim/z-ai/glm4.7"
MODEL_OPUS=
MODEL_SONNET=
MODEL_HAIKU=
ENABLE_MODEL_THINKING=true
ENABLE_OPUS_THINKING=
ENABLE_SONNET_THINKING=
ENABLE_HAIKU_THINKING=

Blank per-tier values inherit the fallback. Blank thinking overrides inherit ENABLE_MODEL_THINKING.

3. Provider Keys And URLs

NVIDIA_NIM_API_KEY=""
OPENROUTER_API_KEY=""
DEEPSEEK_API_KEY=""
WAFER_API_KEY=""
OPENCODE_API_KEY=""
ZAI_API_KEY=""
LM_STUDIO_BASE_URL="http://localhost:1234/v1"
LLAMACPP_BASE_URL="http://localhost:8080/v1"
OLLAMA_BASE_URL="http://localhost:11434"

Proxy settings are per provider:

NVIDIA_NIM_PROXY=""
OPENROUTER_PROXY=""
LMSTUDIO_PROXY=""
LLAMACPP_PROXY=""
WAFER_PROXY=""
OPENCODE_PROXY=""
ZAI_PROXY=""

4. Rate Limits And Timeouts

PROVIDER_RATE_LIMIT=1
PROVIDER_RATE_WINDOW=3
PROVIDER_MAX_CONCURRENCY=5
HTTP_READ_TIMEOUT=120
HTTP_WRITE_TIMEOUT=10
HTTP_CONNECT_TIMEOUT=10

Use lower limits for free hosted providers; local providers can usually tolerate higher concurrency if the machine can handle it.

5. Security And Diagnostics

ANTHROPIC_AUTH_TOKEN=
LOG_RAW_API_PAYLOADS=false
LOG_RAW_SSE_EVENTS=false
LOG_API_ERROR_TRACEBACKS=false
LOG_RAW_MESSAGING_CONTENT=false
LOG_RAW_CLI_DIAGNOSTICS=false
LOG_MESSAGING_ERROR_DETAILS=false

Raw logging flags can expose prompts, tool arguments, paths, and model output. Keep them off unless you are debugging locally.

Structured TRACE rows append fields such as "trace": true, stage, event, and source, and include the conversation context needed to follow Claude Code flows end-to-end. Dictionary keys resembling credentials (for example, api_key or authorization values nested in structured payloads) are redacted; arbitrary prose you type into prompts may still appear verbatim.
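
As an illustration only (these field values are hypothetical, not actual proxy output), a TRACE row might look like:

{"trace": true, "stage": "provider_request", "event": "stream_start", "source": "nvidia_nim"}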

6. Local Web Tools

ENABLE_WEB_SERVER_TOOLS=true
WEB_FETCH_ALLOWED_SCHEMES=http,https
WEB_FETCH_ALLOW_PRIVATE_NETWORKS=false

These tools perform outbound HTTP from the proxy. Keep private-network access disabled unless you are in a controlled lab environment.

How It Works

Free Claude Code request flow architecture

Diagram source: assets/how-it-works.mmd.

Important pieces:

  • FastAPI exposes Anthropic-compatible routes such as /v1/messages, /v1/messages/count_tokens, and /v1/models (see the curl sketch after this list).
  • Model routing resolves the Claude model name to MODEL_OPUS, MODEL_SONNET, MODEL_HAIKU, or MODEL.
  • NIM, OpenCode Zen, and Z.ai use OpenAI chat streaming translated into Anthropic SSE.
  • Wafer, OpenRouter, DeepSeek, LM Studio, llama.cpp, and Ollama use Anthropic Messages style transports.
  • The proxy normalizes thinking blocks, tool calls, token usage metadata, and provider errors into the shape Claude Code expects.
  • Request optimizations answer trivial Claude Code probes locally to save latency and quota.
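
To poke at the Anthropic-compatible surface directly, you can send a standard Messages request to the proxy. This is a sketch, not proxy-specific documentation: it assumes the default port, the freecc token from earlier examples passed as a bearer token (the usual mapping for ANTHROPIC_AUTH_TOKEN), and an illustrative Claude model name that the routing layer resolves to a tier:

curl -s http://127.0.0.1:8082/v1/messages \
  -H "Authorization: Bearer freecc" \
  -H "content-type: application/json" \
  -d '{"model": "claude-sonnet", "max_tokens": 128, "messages": [{"role": "user", "content": "Hello"}]}'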

Development

1. Project Structure

free-claude-code/
├── server.py              # ASGI entry point
├── api/                   # FastAPI routes, service layer, routing, optimizations
├── core/                  # Shared Anthropic protocol helpers and SSE utilities
├── providers/             # Provider transports, registry, rate limiting
├── messaging/             # Discord/Telegram adapters, sessions, voice
├── cli/                   # Package entry points and Claude process management
├── config/                # Settings, provider catalog, logging
└── tests/                 # Unit and contract tests

2. Run From Source

Use this path if you are developing or want to run directly from a checkout:

git clone https://github.com/Alishahryar1/free-claude-code.git
cd free-claude-code
uv run uvicorn server:app --host 0.0.0.0 --port 8082

3. Commands

uv run ruff format
uv run ruff check
uv run ty check
uv run pytest

Run them in that order before pushing. CI enforces the same checks.

4. Package Scripts

pyproject.toml installs:

  • fcc-server: starts the proxy with configured host and port.
  • fcc-init: optional file-based config scaffold at ~/.config/free-claude-code/.env.
  • fcc-claude: launches Claude Code with the configured local proxy URL, auth token, and model discovery flag.
  • free-claude-code: compatibility alias for fcc-server.

5. Extending

  • Add OpenAI-compatible providers by extending OpenAIChatTransport.
  • Add Anthropic Messages providers by extending AnthropicMessagesTransport.
  • Register provider metadata in config.provider_catalog and factory wiring in providers.registry.
  • Add messaging platforms by implementing the MessagingPlatform interface in messaging/.

Contributing

  • Report bugs and feature requests in Issues.
  • Keep changes small and covered by focused tests.
  • Do not open Docker integration PRs.
  • Do not open README change PRs; open an issue instead.
  • Run the full check sequence before opening a pull request.
  • The except X, Y syntax (without parentheses) returns in the final Python 3.14 release, not in the 3.14 alphas; keep this in mind before opening PRs.

License

MIT License. See LICENSE for details.