在受限云环境(ClawCloud Run)中构建弹性代理服务:3X-UI、Cloudflare WARP 与 Tunnel 的深度集成实践
导言:驾驭云原生部署的复杂性
docker 镜像已经制作好了
部署的时候通过配置 configmaps 把 supervisord.conf 挂载到 /etc/supervisor/conf.d/supervisord.conf 即可。
部署参数 :
Command 填写"bash -c"
Arguments 填写"rm -frv /run/dbus/pid && mkdir -pv /var/log/supervisor/ && /supervisord/supervisord_static -c /etc/supervisor/conf.d/supervisord.conf"
docker.cnb.cool/masx200/docker_mirror/ubuntu-3x-ui-warp:2025-12-10-11-22-49
cloudflare/cloudflared:latest
在当今高度互联的数字时代,对安全、高效且灵活的网络代理与隧道服务的需求日益增长。无论是为了保障数据传输的隐私、突破地域限制,还是为了构建复杂的网络架构,开发者与系统管理员都在不断寻求更优的解决方案。3X-UI,作为一款强大的 Xray 核心管理面板,因其对多种代理协议的广泛支持和用户友好的Web界面,受到了众多用户的青睐 [10] [11]。它允许用户通过一个简洁的界面轻松配置和管理 VPN 及代理服务器,支持包括 VLESS、VMess、Trojan、Shadowsocks 等在内的主流协议,并具备流量统计、用户管理、节点订阅等高级功能 [12]。然而,将如此强大的工具部署到特定的云环境中,尤其是那些存在诸多限制的平台,往往需要精心的规划与巧妙的变通。ClawCloud Run (run.claw.cloud) 便是这样一个平台,它提供了一个类似 Kubernetes 的高性能、轻量级云原生部署环境,集成了 GitOps 工作流,并原生支持 Docker/Kubernetes [1]。尽管其承诺了高达 99.975% 的正常运行时间 SLA 和卓越的数据可靠性 [2],但其免费计划或某些特定配置下存在的一些固有限制,为直接部署复杂应用如 3X-UI 带来了挑战。这些限制主要包括:缺乏特权操作权限(无法创建 TUN/TAP 等虚拟网络设备)、不支持 IPv6 网络、以及域名绑定服务可能出现的长时间无响应或延迟问题。这些障碍,若不加以妥善处理,将直接阻碍 3X-UI 的正常运行及其核心功能的发挥。本报告旨在深入剖析在 ClawCloud Run 这类受限云平台上,如何通过综合运用 Docker 容器化技术、Cloudflare WARP 以及 Cloudflare Tunnel,成功部署并稳定运行 3X-UI 服务。我们将不仅提供一份操作指南,更致力于揭示其背后的设计哲学、技术选型考量、以及在克服环境限制过程中所展现的系统工程思想。报告将详细阐述环境限制的本质及其对部署方案的影响,探讨容器化部署如何作为一种有效的隔离与权限规避手段,分析 Cloudflare WARP 如何在受限网络环境中提供稳定的出站连接,并阐释 Cloudflare Tunnel 如何巧妙地绕过平台域名绑定的瓶颈。更进一步,本报告将聚焦于服务管理的复杂性与稳定性,深入探讨使用 supervisord 作为进程管理器来协调容器内多个依赖服务(如 D-Bus、WARP 服务、3X-UI 本身)的优势与实践细节。通过对整个部署架构的层层解构和对关键配置的细致解读,我们期望为读者呈现一幅在资源受限条件下构建高可用、高弹性代理服务的完整蓝图,并为面临类似挑战的实践者提供具有深度洞察的参考与借鉴。这不仅是一个技术教程,更是一次关于问题解决、架构设计与系统优化的深度案例分析。
破解环境枷锁:ClawCloud Run 平台限制与应对策略深度解析
在任何复杂的部署任务中,首要且至关重要的一步是对目标运行环境的特性与限制进行彻底的评估与理解。ClawCloud Run,作为一个旨在抽象 Kubernetes 复杂性并提供简化云原生应用部署体验的平台 [[8](https://docs.run.claw.cloud/clawcloud-run/architecture/system-architecture)],其设计理念在带来便利的同时,也引入了一些必须正视的约束条件。这些约束,特别是对于需要深度网络配置和系统权限的应用(如 3X-UI 这类代理服务器管理面板),构成了部署过程中的主要障碍。本章节将深入剖析这些核心限制,并逐一阐述我们为克服这些挑战所设计的应对策略,这些策略共同构成了后续整个部署方案的基石。首先,**无特权环境 (Unprivileged Environment)** 是 ClawCloud Run 这类平台最显著的特征之一。出于安全和多租户隔离的考虑,平台通常不允许容器内的进程获取宿主机的 root 权限或执行特定的特权操作。对于 3X-UI 而言,其依赖的 Xray 核心在某些配置或协议下可能需要创建 TUN/TAP 虚拟网络设备来实现数据包的转发与处理。在无特权环境中,直接创建这类设备通常是被禁止的,这将导致 Xray 核心无法正常启动或功能受限。面对这一挑战,**Docker 容器化部署**成为了我们的首选破局之策。通过精心构建 Docker 镜像,我们可以将 3X-UI 及其所有依赖项打包成一个独立的、可移植的单元。尽管容器本身可能运行在非特权模式下,但 Docker 的层叠文件系统和进程隔离机制为应用提供了一个相对独立的运行环境。更进一步,通过在 Dockerfile 中正确设置用户和权限,遵循 Docker 安全最佳实践,例如使用 `USER` 指令切换到非 root 用户运行应用 [[34](https://www.docker.com/blog/understanding-the-docker-user-instruction)],以及利用 Linux capabilities 机制进行细粒度的权限控制 [[35](https://blog.secureflag.com/2020/12/08/securing-the-docker-ecosystem-part-3-the-container-runtime)],我们可以在不破坏平台安全策略的前提下,为 3X-UI 创造一个可工作的运行时环境。在某些情况下,如果平台允许,可以尝试向容器添加特定的 Linux capabilities(如 `NET_ADMIN`)以支持必要的网络操作,但这通常比直接使用 `--privileged` 标志更为安全,后者应尽可能避免,因为它会赋予容器过多的权限,增加安全风险 [[36](https://www.trendmicro.com/en_us/research/19/l/why-running-a-privileged-container-in-docker-is-a-bad-idea.html)] [[37](https://sourcery.ai/vulnerabilities/docker-privileged-containers)]。其次,不支持 IPv6 是 ClawCloud Run 平台的另一个明确限制。虽然全球 IPv6 的部署日益广泛,但在某些云环境中,尤其是简化版或面向特定用户群体的平台,对 IPv6 的支持可能并不完整或完全缺失。3X-UI 及其底层的 Xray 核心在设计上支持 IPv6,如果配置不当,可能会尝试监听 IPv6 地址或进行 IPv6 路由,这在无 IPv6 环境下会导致不必要的错误或功能异常。因此,我们的应对策略是在所有相关配置中明确禁用 IPv6 功能。这包括在 3X-UI 的面板设置中,确保入站和出站规则不包含 IPv6 相关的选项;在 Xray 核心的配置文件中,移除或注释掉 IPv6 的监听和路由条目;以及在 Docker 容器的启动参数或 docker-compose.yml 文件中,通过系统级控制(如 sysctls)来禁用 IPv6。例如,可以在 docker-compose.yml 的服务定义中加入 sysctls: - net.ipv6.conf.all.disable_ipv6=1,以确保容器内部彻底禁用 IPv6。这种主动禁用的方式可以避免潜在的兼容性问题,并确保服务在纯 IPv4 环境中的稳定运行。这不仅是对平台限制的适应,也是一种在特定环境下简化网络配置、减少不确定性的有效手段。
第三,域名绑定延迟或无响应 的问题,据用户反馈,在 ClawCloud Run 上较为常见。传统的部署方式通常需要在云平台配置 DNS 记录,将域名指向服务器的公网 IP 地址,并可能需要在服务器端配置 Web 服务器(如 Nginx)进行反向代理。如果平台的域名绑定服务本身存在延迟或故障,这将直接影响服务的可访问性和部署效率。为了绕过这一潜在的瓶颈,我们引入了 Cloudflare Tunnel。Cloudflare Tunnel 是一种强大的反向代理技术,它允许用户将本地或私有网络中的服务安全地暴露到互联网,而无需在防火墙上开放入站端口或配置公网 IP 地址 [24]。其工作原理是在本地运行一个轻量级的 cloudflared 客户端,该客户端与 Cloudflare 的边缘网络建立一个出站的、仅限外发的加密隧道 [21]。然后,通过 Cloudflare 的 DNS 控制面板将指定域名的流量指向这个隧道。这样,所有对域名的访问请求都会先经过 Cloudflare 的全球网络,再通过隧道安全地转发到运行在 ClawCloud Run 上的 3X-UI 服务。这种方法不仅完全绕过了 ClawCloud Run 平台自身的域名绑定机制,还带来了额外的安全优势,如 DDoS 防护、SSL/TLS 终止等,这些都是 Cloudflare 网络提供的增值服务。
最后,网络限制与出站连接的稳定性 也是在云平台部署服务时需要考虑的因素。某些平台可能会对出站流量进行限制,或者由于网络路径复杂,导致出站连接不稳定或速度较慢。这对于需要主动连接外部网络的代理服务来说尤其重要。为了确保 3X-UI 服务拥有一个稳定且可靠的出站连接,我们集成了 Cloudflare WARP。Cloudflare WARP 是一个轻量级的客户端,它通过 WireGuard 或 MASQUE 协议将设备的流量安全地路由到 Cloudflare 的全球网络 [26]。通过在运行 3X-UI 的容器内部署并连接 WARP,我们可以将所有出站流量(包括代理服务器转发给目标服务器的流量)通过 Cloudflare 的优化网络进行传输。这不仅可以改善连接质量,提高访问速度,还能在一定程度上增加流量的隐私性和抗封锁能力。WARP 客户端在 Linux 系统中通常依赖于 D-Bus 消息总线进行通信和配置管理,这为在容器化环境中部署 WARP 带来了额外的复杂性,因为容器默认可能不包含或正确配置 D-Bus 服务。这一挑战将在后续章节中,当我们讨论使用 supervisord 管理多进程容器时,得到详细的解决。
综上所述,ClawCloud Run 平台的这些限制——无特权环境、无 IPv6 支持、域名绑定问题以及潜在的网络不稳定性——共同构成了一个非理想的部署场景。然而,通过组合使用 Docker 容器化、明确禁用 IPv6、Cloudflare Tunnel 以及 Cloudflare WARP,我们构建了一个多层次、协同工作的解决方案。这个方案不仅有效地克服了各个单一的限制,而且各个组件之间相互补充,共同提升整个系统的健壮性、安全性和可访问性。这种将挑战转化为采用更先进、更灵活技术组合的契机,正是现代云原生工程实践的精髓所在。接下来的章节将详细阐述如何将这些策略转化为具体的、可执行的部署步骤和配置。
构建稳固基石:Docker 容器化部署与服务管理优化
在深入探讨具体的部署细节之前,我们必须首先确立一个核心的构建原则:在任何云环境中,尤其是像 ClawCloud Run 这样存在诸多限制的平台,一个精心设计的容器化策略是确保应用成功部署和稳定运行的关键。Docker 容器化不仅为应用提供了封装和隔离,更重要的是,它为我们提供了一种应对平台限制、标准化部署流程、以及有效管理复杂依赖关系的强大工具。本章节将聚焦于如何通过优化 Docker 镜像构建、引入 `supervisord` 作为进程管理器、以及精心设计 `docker-compose.yml` 配置,来为 3X-UI、Cloudflare WARP 以及后续的 Cloudflare Tunnel 集成,打造一个稳固、可靠且易于维护的运行基石。我们将从 Dockerfile 的设计哲学讲起,逐步过渡到在单一容器内高效管理多个协同服务的复杂议题,这是本次部署方案的核心技术亮点之一。Dockerfile 的演进:从基础到集成
最初的 Dockerfile 设计思路,如思考助手提供的第一个版本,是基于 Alpine Linux 的。Alpine 以其小巧的体积和安全性而闻名,是构建轻量级镜像的流行选择。该版本包含了构建 3X-UI 所需的步骤,如安装 Go 语言环境、编译源码、复制可执行文件,并尝试在 Alpine 环境中安装 Cloudflare WARP。然而,Alpine Linux 使用 `musl libc` 作为其标准 C 库,这与更常见的 `glibc` 在某些二进制兼容性上可能存在差异。更重要的是,Cloudflare WARP 客户端(特别是 `warp-svc` 服务)在某些情况下可能对 `glibc` 有更强的依赖,或者其官方提供的安装包主要是针对 Debian/Ubuntu 等 `glibc` 系发行版。因此,在 Alpine 上安装和运行 WARP 可能会遇到额外的挑战,例如需要手动解决依赖关系,或者某些功能可能无法正常工作。此外,原始 Dockerfile 中使用 `CMD [ "/bin/sh", "-c", "warp-cli --accept-tos register && warp-cli --accept-tos connect && ./x-ui" ]` 来启动服务,这种方式虽然简单,但存在显著的缺陷:它将 WARP 的配置连接与 3X-UI 的启动串联在了一个 shell 命令中。如果 `warp-cli connect` 命令阻塞或失败,或者后续需要管理多个独立的后台服务,这种简单的 shell 命令链将显得力不从心,难以进行细粒度的控制和故障恢复。鉴于这些考虑,一个更优化的方案,如思考助手在后续改进中提出的,是采用 Ubuntu 22.04 作为基础镜像。Ubuntu 拥有庞大的软件仓库和社区支持,其 glibc 环境与大多数主流软件(包括 Cloudflare WARP)具有更好的兼容性。使用 Ubuntu 可以显著简化 WARP 的安装过程,通常只需按照 Cloudflare 官方文档添加软件源并使用 apt-get install 即可,这大大降低了因环境问题导致部署失败的风险。从 Alpine 迁移到 Ubuntu,虽然会使最终镜像的体积有所增加,但在这种需要运行多个复杂系统级服务的场景下,兼容性和部署的便利性往往比极致的镜像体积压缩更为重要。这种权衡体现了在实践中,"最佳"选择往往是根据具体应用场景和需求而定的,而非盲目追求单一指标。
进程管理之殇:`supervisord` 的引入与价值
在容器化部署的实践中,一个被广泛推崇的最佳实践是"一个容器一个主进程" [[40](https://docs.docker.com/engine/containers/multi-service_container)] [[42](https://forums.docker.com/t/best-practices-multiple-app-in-containers/913)]。这种模式有助于保持容器的简洁性,并利用 Docker 自身的进程管理机制(如自动重启容器)来保证服务的可用性。然而,当我们需要在单个容器内运行多个相互依赖或需要独立管理的后台服务时(例如,本方案中的 D-Bus、Cloudflare WARP 服务 `warp-svc`、以及 3X-UI 本身),这个最佳实践就面临了挑战。简单地使用 shell 脚本后台启动多个进程,难以有效地监控它们的状态、处理进程的意外退出、以及管理它们之间的启动顺序和依赖关系。这正是进程管理器(Process Manager)大显身手的地方。`supervisord` 是一个用 Python 编写的客户端/服务器系统,它允许用户在类 UNIX 操作系统上监控和控制多个进程 [[44](https://dev.to/pratapkute/multiple-services-in-a-docker-with-supervisord-2g13)]。它提供了一套统一的机制来启动、停止、重启进程,并可以配置进程的自动重启、日志轮转等,非常适合在 Docker 容器中管理多个服务 [[47](https://medium.com/@patricia.calestino.rodrigues/why-use-supervisord-and-supervisorctl-to-run-multiple-processes-via-docker-file-df02d613f030)]。在我们的部署方案中,`supervisord` 扮演着至关重要的角色,它解决了在单一容器内协调多个复杂服务的难题,确保了整个系统的稳定性和可维护性。`supervisord.conf` 的精雕细琢:服务编排与依赖管理
`supervisord` 的核心在于其配置文件(通常是 `supervisord.conf`),它定义了所有需要被管理的程序(programs)及其行为。一个精心设计的 `supervisord.conf` 能够清晰地展现服务间的依赖关系和启动顺序,确保系统按预期初始化。以下是一个针对我们场景的 `supervisord.conf` 示例及其关键配置的深度解析:
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0700
[supervisord]
logfile=/var/log/supervisor/supervisord.log,/dev/stdout
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/var/run/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200
user=root
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
# 优先级 100:先启动 dbus(WARP 的依赖)
[program:dbus]
command=/usr/bin/dbus-daemon --config-file=/usr/share/dbus-1/system.conf --system
autostart=true
autorestart=true
startsecs=0
startretries=0
stdout_logfile=/var/log/supervisor/dbus.log,/dev/stdout
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stderr_logfile=/var/log/supervisor/dbus.err.log,/dev/stderr
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
priority=100
# 优先级 200:然后启动 WARP 服务
[program:warp-svc]
command=/bin/warp-svc --accept-tos
autostart=true
autorestart=true
startsecs=5
startretries=10
stdout_logfile=/var/log/supervisor/warp-svc.log,/dev/stdout
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stderr_logfile=/var/log/supervisor/warp-svc.err.log,/dev/stderr
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
priority=200
environment=HOME="/root",USER="root"
# 优先级 250:WARP 初始化配置 (一次性任务)
[program:warp-init]
command=/app/init-warp.sh
autostart=true
autorestart=false
startsecs=10
startretries=3
stdout_logfile=/var/log/supervisor/warp-init.log,/dev/stdout
stderr_logfile=/var/log/supervisor/warp-init.err.log,/dev/stderr
priority=250
depends_on=dbus,warp-svc
# 优先级 300:最后启动 x-ui(确保依赖服务已就绪)
[program:x-ui]
command=/x-ui/x-ui-linux-amd64/x-ui/x-ui
environment=XRAY_VMESS_AEAD_FORCED="false",XUI_ENABLE_FAIL2BAN="false"
directory=/x-ui/x-ui-linux-amd64/x-ui
autostart=true
autorestart=true
startsecs=5
startretries=10
stdout_logfile=/var/log/supervisor/x-ui.log,/dev/stdout
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stderr_logfile=/var/log/supervisor/x-ui.err.log,/dev/stderr
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
priority=300
配置深度解析:
[supervisord]** 节**:logfile=/var/log/supervisor/supervisord.log,/dev/stdout: 将supervisord自身的日志同时写入文件和标准输出。写入标准输出对于 Docker 容器至关重要,因为它允许 Docker 的日志驱动(如json-file)捕获并管理这些日志,便于通过docker logs命令查看。nodaemon=false: 这是关键设置。它告诉supervisord作为守护进程运行。在 Docker 容器中,我们通常将supervisord作为容器的主进程(PID 1)。如果设置为true(非守护模式),supervisord在前台运行,这也可以工作,但守护进程模式更符合其设计初衷。只要确保supervisord是容器启动时执行的最后一个命令,它就能正确地管理子进程并接收信号。user=root: 指定supervisord及其管理的程序默认以 root 用户身份运行。在我们的场景中,由于需要启动系统服务(如 dbus)和可能需要网络权限的 WARP,使用 root 是必要的。如果安全策略允许,且应用支持,可以考虑在特定程序配置中使用user指令降权运行。
[program:dbus]** 节**:command=/usr/bin/dbus-daemon --config-file=/usr/share/dbus-1/system.conf --system: 启动 D-Bus 系统守护进程。Cloudflare WARP 客户端 (warp-svc) 依赖 D-Bus 进行进程间通信和配置管理。因此,D-Bus 必须在 WARP 之前启动并正常运行。priority=100:supervisord按照优先级(数字越小优先级越高)的顺序启动程序。将 D-Bus 的优先级设置为最低的 100,确保它是第一个被启动的关键服务。startsecs=0,startretries=0:startsecs表示进程启动后需要保持运行多少秒才被认为是成功启动。对于 D-Bus 这样的系统服务,如果启动命令本身不阻塞,可以设置为 0。startretries=0表示如果启动失败,不进行重试,因为通常如果 D-Bus 无法启动,说明系统环境存在根本性问题,重试也无济于事。
[program:warp-svc]** 节**:command=/bin/warp-svc --accept-tos: 启动 Cloudflare WARP 的后台服务。--accept-tos参数用于自动接受服务条款,这在自动化部署中是必要的。priority=200: 优先级高于 D-Bus,确保在 D-Bus 启动后再启动 WARP 服务。startsecs=5: 给予 WARP 服务 5 秒的启动时间。如果进程在 5 秒后仍在运行,则认为启动成功。startretries=10: 如果启动失败,最多重试 10 次,增加了服务的韧性。environment=HOME="/root",USER="root": 为warp-svc进程设置必要的环境变量。某些版本的 WARP 可能需要这些变量来正确找到其配置文件或数据目录。
[program:warp-init]** 节 (WARP 初始化脚本)**:这是一个一次性任务,用于在 warp-svc 启动后执行 WARP 的注册和连接操作。command=/app/init-warp.sh 指向一个自定义的 shell 脚本,其内容可能如下:
#!/bin/bash
echo "开始初始化 Cloudflare WARP..."
# 等待 dbus 和 warp-svc 完全启动
sleep 5
# 注册 WARP
warp-cli --accept-tos registration new
# 连接 WARP
warp-cli --accept-tos connect
# 检查连接状态
for i in {1..30}; do
if warp-cli --accept-tos status | grep -q "Connected"; then
echo "WARP 连接成功!"
exit 0
fi
echo "等待 WARP 连接... ($i/30)"
sleep 2
done
echo "WARP 连接超时或失败!"
exit 1
- `autostart=true`, `autorestart=false`: `autostart` 确保脚本会被执行。`autorestart=false` 非常重要,因为这是一个一次性初始化脚本,执行成功或失败后都不应被 `supervisord` 自动重启。
- `priority=250`: 在 `warp-svc` 之后执行。
- `depends_on=dbus,warp-svc`: `supervisord` 的一个强大功能。它明确指定了 `warp-init` 程序依赖于 `dbus` 和 `warp-svc` 程序的 RUNNING 状态。只有当这两个依赖服务都成功启动后,`warp-init` 才会被执行。这确保了初始化操作在正确的时机进行。
[program:x-ui]** 节**:command=/x-ui/x-ui-linux-amd64/x-ui/x-ui: 启动 3X-UI 主程序(与上文配置中的路径一致)。environment=XRAY_VMESS_AEAD_FORCED="false",XUI_ENABLE_FAIL2BAN="false": 为 3X-UI 设置环境变量。XUI_ENABLE_FAIL2BAN="false"在受限环境中是一个有用的设置,因为 fail2ban 可能需要额外的系统权限和配置。priority=300: 最高优先级,确保在所有依赖服务(D-Bus, WARP 服务,WARP 初始化)都就绪后,最后启动 3X-UI。startsecs=5,startretries=10: 与 WARP 服务类似的配置,给予 3X-UI 充足的启动时间和重试机会。
通过这样细致的 supervisord.conf 配置,我们不仅实现了在单个容器内运行多个复杂服务,更重要的是,我们建立了一个清晰的启动顺序和依赖关系图。supervisord 会按照 priority 从低到高的顺序启动程序,并且只有当一个程序的 depends_on 列表中的所有程序都处于 RUNNING 状态时,它才会被启动。这种机制极大地提高了整个系统启动的可靠性和可预测性。如果某个服务(如 D-Bus)启动失败,supervisord 会记录错误,并且所有依赖于它的服务(WARP、3X-UI)都不会被启动,避免了系统进入一个不确定或部分可用的状态。
`docker-compose.yml` 的协同作用
虽然 `supervisord` 负责容器内部的进程管理,但 `docker-compose.yml` 文件则在更高层次上定义了容器的行为、资源限制、网络配置以及容器间的通信(如果涉及多个容器)。以下是一个针对我们单容器部署方案的 `docker-compose.yml` 示例:
version: '3.8'
services:
3x-ui-app:
build:
context: .
dockerfile: Dockerfile # 指向包含 supervisord 的 Dockerfile
container_name: 3xui_app
volumes:
- ./db/:/etc/x-ui/ # 持久化 3X-UI 的数据库和配置
- ./cert/:/root/cert/ # 持久化 SSL 证书 (如果自签名或特定需要)
- ./supervisor_logs/:/var/log/supervisor/ # 可选:持久化 supervisord 管理的服务的日志
environment:
- TZ=Asia/Shanghai
# XRAY_VMESS_AEAD_FORCED 和 XUI_ENABLE_FAIL2BAN 也可以在这里设置,或在 supervisord.conf 的 x-ui program 中
ports:
- "2053:2053" # 将主机的 2053 端口映射到容器的 2053 端口 (3X-UI Web 面板)
cap_add:
- NET_ADMIN # 可能需要用于 Xray 核心的某些网络操作
- SYS_ADMIN # WARP 或某些系统级操作可能需要
security_opt:
- seccomp:unconfined # 在某些严格的安全配置下可能需要,但应谨慎评估风险
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:2053"] # 检查 3X-UI Web 面板是否可访问
interval: 30s
timeout: 10s
retries: 3
start_period: 60s # 给 supervisord 和所有服务足够的时间启动
# Cloudflare Tunnel 可以作为另一个独立服务运行,也可以在主机上通过 systemd 等管理
# 如果作为容器运行,并且需要与 3x-ui-app 容器通信,则需要配置 Docker 网络
# 此处示例为独立容器,假设 tunnel 通过 localhost 访问 3x-ui (需要 host network 或其他配置)
cloudflare-tunnel:
image: cloudflare/cloudflared:latest
container_name: cf-tunnel
command: tunnel --config /etc/cloudflared/config.yml run
volumes:
- ./cloudflared/:/etc/cloudflared/ # 包含 tunnel-id.json 和 config.yml
restart: unless-stopped
# depends_on:
# - 3x-ui-app # 如果 tunnel 需要等待 3x-ui 完全启动,可以使用 depends_on
# 但 cloudflared 本身有重连机制,所以不是必须的
**`docker-compose.yml` 关键配置解析:**
build: 指定 Docker 镜像的构建上下文和 Dockerfile 路径。volumes: 数据持久化的核心。./db/:/etc/x-ui/: 将 3X-UI 的配置文件和数据库(通常位于/etc/x-ui/目录)挂载到主机的./db目录。这样即使容器被销毁和重建,3X-UI 的所有设置和用户数据也不会丢失。./cert/:/root/cert/: 如果需要使用自定义的 SSL 证书(例如,如果 3X-UI 的 Web 面板需要 HTTPS,或者某些代理协议需要证书),可以将其挂载到这里。./supervisor_logs/:/var/log/supervisor/: 可选的,但强烈建议用于调试。将supervisord及其管理的所有服务的日志持久化到主机,方便在容器出现问题后进行详细分析。
ports: 将容器的端口暴露给主机。这里我们使用 2053 端口,这是一个相对不常见的端口,有时可以避免一些基础网络环境对常见端口的限制或干扰。cap_add: 向容器添加 Linux capabilities。NET_ADMIN: 允许执行网络管理任务,如配置防火墙规则、创建网络隧道等。Xray 核心在某些配置下可能需要此 capability。SYS_ADMIN: 允许执行广泛的系统管理操作。Cloudflare WARP 可能需要此 capability 来正确安装和运行,尤其是在需要修改系统网络配置或与内核模块交互时。添加 capabilities 比使用--privileged更安全,但仍需谨慎,只授予必要的权限。
security_opt: 安全选项。seccomp:unconfined: 禁用默认的 seccomp 过滤器。Seccomp (secure computing mode) 是一种 Linux 内核特性,用于限制进程可以执行的系统调用。在某些情况下,WARP 或 Xray 可能需要一些被默认 seccomp 配置文件阻止的系统调用。将此设置为unconfined会移除这些限制,但会降低安全性。这应作为最后的手段,并仔细评估风险。 如果可能,应尝试创建自定义的、更宽松的 seccomp 配置文件。
restart: unless-stopped: 确保容器在退出或重启后会自动重新启动,除非手动停止。healthcheck: 定义容器的健康检查。Docker 会定期执行test中的命令。如果命令连续失败retries次,容器将被标记为unhealthy。这对于监控服务的真实可用性非常有用。start_period: 60s: 在容器启动后的最初 60 秒内,健康检查失败不会计入重试次数。这给了supervisord及其管理的所有服务(D-Bus, WARP, 3X-UI)充足的启动时间,避免了因启动耗时较长而导致的误报。
至此,我们已经构建了一个坚实的基础。通过一个基于 Ubuntu 的、集成了 supervisord 的 Docker 镜像,以及一个精心配置的 docker-compose.yml 文件,我们成功地将 3X-UI、Cloudflare WARP 及其依赖的 D-Bus 服务封装在一个统一的管理单元中。supervisord 确保了这些服务在容器内部的正确启动顺序、依赖关系和生命周期管理。这个自包含的、高度自动化的单元,为下一阶段集成 Cloudflare Tunnel,从而实现完整的、可从公网访问的代理服务铺平了道路。这种对细节的关注和对工具的深度整合,是应对复杂部署挑战并实现长期稳定运行的关键。
穿透迷雾:Cloudflare WARP 与 Tunnel 的协同赋能
在成功构建了一个内部稳定运行的、集成了 3X-UI 和 Cloudflare WARP 的 Docker 容器之后,接下来的关键步骤是如何将这个内部服务安全、可靠地暴露到公共互联网,并确保其出站流量同样经过优化和保护。这正是 Cloudflare 家族的另外两个强大工具——Cloudflare WARP 和 Cloudflare Tunnel——协同发挥作用的地方。它们共同构成了我们解决方案的"网络层",负责处理所有入站和出站的流量,为在受限的 ClawCloud Run 环境中运行的 3X-UI 服务提供了强大的网络能力和安全保障。Cloudflare WARP 主要负责优化和保护出站连接,确保容器能够稳定地访问外部资源;而 Cloudflare Tunnel 则巧妙地解决了入站访问的问题,绕过了平台自身的域名绑定限制,并为服务提供了额外的安全层。理解这两者如何独立工作以及如何相互补充,对于掌握整个部署方案的精髓至关重要。Cloudflare WARP:构筑稳固的出站连接
Cloudflare WARP 本质上是一个为个人设备设计的轻量级 VPN 客户端,它通过将用户的流量通过安全的 WireGuard 或 MASQUE 隧道路由到 Cloudflare 遍布全球的边缘网络,从而提升用户的网络隐私、安全性和性能 [[26](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/warp)]。在我们的部署方案中,WARP 被赋予了新的角色:作为运行在 ClawCloud Run 容器内的 3X-UI 代理服务的出站网关。这意味着,所有由 3X-UI 转发的客户端流量,在离开 ClawCloud Run 的网络并前往最终目的地之前,都会先经过 Cloudflare WARP 的处理。这种设计带来了多重显著优势。首先,**连接稳定性与性能提升**。ClawCloud Run 作为云平台,其出站网络质量可能受到多种因素影响,包括路由优化、国际带宽限制等。通过 WARP,这些流量被引导至 Cloudflare 拥有高度优化的全球骨干网络,这通常意味着更低的延迟、更高的吞吐量和更稳定的连接,尤其是在访问国际资源时表现更为突出。其次,**增强的隐私与抗封锁能力**。WARP 的出站流量源自 Cloudflare 的 IP 地址池,这在一定程度上隐藏了 ClawCloud Run 服务器的真实源 IP,为代理服务器本身提供了一层额外的隐私保护。同时,由于 Cloudflare 网络的规模和声誉,其 IP 地址被封锁的可能性相对较低,从而提高了代理服务的可用性。第三,**简化的网络配置**。在某些复杂的网络环境中,可能需要手动配置代理或路由规则才能确保出站流量正常。WARP 通过在操作系统层面创建一个虚拟网络接口,并自动配置路由,使得所有出站流量(或特定流量)默认通过其隧道,简化了容器内的网络管理。然而,在 Docker 容器中集成 Cloudflare WARP 并非没有挑战。如前所述,WARP 客户端(特别是 warp-svc 后台服务)在 Linux 系统上通常依赖于 D-Bus (Desktop Bus) 进行系统级的通信、状态管理和策略配置。D-Bus 是一个消息总线系统,允许应用程序之间相互通信和交换信息。在标准的 Linux 发行版中,D-Bus 通常作为系统服务自动运行。但在一个最小化的 Docker 容器中,D-Bus 服务默认是不存在的。因此,要成功运行 WARP,我们必须首先在容器内启动一个 D-Bus 守护进程。这正是我们在前一章节中通过 supervisord 配置 [program:dbus] 来实现的。supervisord 确保 D-Bus (dbus-daemon --system) 优先于 WARP 服务 (warp-svc) 启动,从而满足了 WARP 的运行时依赖。在 warp-svc 成功启动后,还需要执行 warp-cli register 和 warp-cli connect 命令来激活 WARP 连接。这些操作通常只需要在首次设置或重置时执行一次。我们通过一个独立的 warp-init 脚本和对应的 [program:warp-init] 配置来处理这些一次性任务,并通过 depends_on 确保它在 warp-svc 成功运行后执行。这种通过 supervisord 精心编排的多进程启动序列,是克服容器化环境中 WARP 部署复杂性的关键。一旦 WARP 连接成功,容器内的所有出站 IP 流量(除了发往 Cloudflare 边缘以维持 WARP 隧道本身的流量)都将通过 WARP 隧道进行路由。这意味着 3X-UI 的 Xray 核心在转发用户数据时,其源 IP 地址将是 Cloudflare 的某个出口 IP,而不是 ClawCloud Run 服务器的 IP。这种透明代理的特性使得 3X-UI 本身无需进行特殊的出站配置即可受益于 WARP,当然,如果需要更精细的流量控制(例如,部分流量走 WARP,部分直连),则需要在 Xray 的出站规则中进行相应配置。
Cloudflare Tunnel:安全、无感地暴露入站服务
解决了出站连接的优化问题后,我们面临的下一个挑战是如何让用户能够从公网访问到运行在 ClawCloud Run 容器内的 3X-UI Web 管理面板,以及可能由 3X-UI 提供的代理服务(如果代理协议本身也需要通过一个公网域名访问的话)。传统的做法是在 ClawCloud Run 的控制面板中配置域名解析,将一个 A 记录指向服务器的公网 IP,然后在服务器上配置 Nginx 等反向代理,将域名的流量转发到本地运行的 3X-UI 服务。然而,正如前言中提到的,ClawCloud Run 的域名绑定服务可能存在延迟或无响应的问题,这使得传统方法变得不可靠。Cloudflare Tunnel 提供了一个革命性的替代方案。它是一个出站-only (outbound-only) 的反向代理隧道 [[21](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/cloudflared)]。这意味着你不需要在服务器上开放任何入站端口,也不需要配置公网 IP。取而代之的是,你在服务器上运行一个名为 `cloudflared` 的轻量级客户端,它会主动建立一个从你的服务器到 Cloudflare 边缘网络的加密、持久性出站连接。然后,你只需在 Cloudflare 的 DNS 控制面板中,将你的域名(例如 `3x-ui.yourdomain.com`)的 CNAME 记录指向 Cloudflare 为你生成的隧道端点。当用户访问 `https://3x-ui.yourdomain.com` 时,请求首先到达 Cloudflare 的边缘服务器,然后通过已经建立好的安全隧道,被转发到本地服务器上 `cloudflared` 客户端指定的本地服务(例如 `http://localhost:2053`,即 3X-UI 的 Web 面板)。部署 Cloudflare Tunnel 通常包括以下步骤,这些可以在本地开发机器或任何能访问互联网的地方完成,然后将生成的配置文件和凭据上传到 ClawCloud Run 服务器:
- **安装 **
cloudflared: 根据服务器操作系统下载并安装cloudflared客户端。 - 认证: 执行
cloudflared tunnel login,这会打开一个浏览器窗口,让你授权cloudflared访问你的 Cloudflare 域名。 - 创建隧道: 执行
cloudflared tunnel create <tunnel-name>(例如cloudflared tunnel create 3x-ui-tunnel)。Cloudflare 会生成一个唯一的隧道 ID 和一个 JSON 格式的凭据文件。这个凭据文件非常重要,需要妥善保管,后续cloudflared客户端需要用它来认证并连接到这个隧道。 - 配置隧道: 创建一个 YAML 配置文件(例如
~/.cloudflared/config.yml),定义隧道如何将传入的流量路由到本地服务。一个基本的配置如下:
tunnel: <your-tunnel-uuid> # 从上一步获取的隧道 UUID
credentials-file: /path/to/your/tunnel-credentials.json # 凭据文件的路径
ingress:
- hostname: 3x-ui.yourdomain.com
service: http://localhost:2053 # 将流量转发到本地 2053 端口的 3X-UI Web 面板
- service: http_status:404 # 其他所有请求返回 404
这个配置告诉 Cloudflare Tunnel,所有发往 3x-ui.yourdomain.com 的流量都应该被代理到本地主机的 2053 端口。
- 创建 DNS 记录: 执行
cloudflared tunnel route dns <tunnel-name> <hostname>(例如cloudflared tunnel route dns 3x-ui-tunnel 3x-ui.yourdomain.com)。这会自动在你的 Cloudflare DNS 区域中创建一个 CNAME 记录,将你的域名指向隧道。 - 运行隧道: 最后,在服务器上执行
cloudflared tunnel --config /path/to/config.yml run来启动隧道客户端。
在我们的 docker-compose.yml 示例中,Cloudflare Tunnel 被设计为一个独立的 Docker 服务 (cloudflare-tunnel)。这样做的好处是关注点分离:3X-UI 及其 WARP 出站在一个容器中管理,而 Tunnel 的入站代理则在另一个容器中运行。它们通过 Docker 的默认网络进行通信,或者,如果配置了 host 网络模式(在 3x-ui-app 服务中),则 cloudflared 可以直接通过 localhost 访问 3X-UI。cloudflared 容器通过卷挂载的方式访问包含隧道凭据和配置的 ./cloudflared/ 目录。restart: unless-stopped 策略确保隧道服务的高可用性。
Cloudflare Tunnel 的优势是多方面的。它完美地绕过了 ClawCloud Run 平台可能存在的域名绑定问题,因为我们不再依赖平台来处理 DNS 或端口转发。它极大地增强了安全性,因为服务器上不需要开放任何入站端口,所有入站流量都经过 Cloudflare 边缘的过滤和 WAF(Web 应用防火墙)保护,并且隧道本身是加密的。它还简化了 SSL/TLS 证书的管理,你可以在 Cloudflare 的控制面板中为你的域名启用 Cloudflare 提供的免费通配符证书,Cloudflare 会在其边缘服务器上终止 SSL 连接,然后以 HTTP 的形式将流量转发给你的本地服务(或者你也可以配置端到端的加密)。
WARP 与 Tunnel 的协同效应
将 Cloudflare WARP 和 Cloudflare Tunnel 结合起来,形成了一个强大的网络架构,为在 ClawCloud Run 上部署的 3X-UI 服务提供了全方位的网络优化和保护。WARP 负责处理所有"出站"的流量,确保 3X-UI 代理服务器在访问外部世界时拥有稳定、快速且具有一定隐私保护的连接。Tunnel 则负责处理所有"入站"的流量,为用户提供一个安全、可靠的方式来访问 3X-UI 的管理面板,而无需关心 ClawCloud Run 平台的网络限制。这两者共同工作,使得 3X-UI 服务仿佛置身于一个由 Cloudflare 全球网络精心呵护的"温室"之中:对外,它通过 WARP 拥有一个优质的出口;对内,它通过 Tunnel 拥有一个安全的入口。这种架构不仅解决了特定平台的限制,更是一种通用的、具有高度弹性和安全性的服务部署模式。它将复杂的网络配置和管理抽象化,让开发者可以更专注于应用本身的功能,而不是底层的网络难题。通过这种深度的集成与协同,我们成功地将一个在受限环境中本难以部署和管理的复杂应用,转变为一个稳定、安全且易于访问的云原生服务。精益求精:3X-UI 面板配置、运维与故障排除
在完成了底层基础设施的构建——即集成了 3X-UI、Cloudflare WARP 并通过 `supervisord` 进行管理的 Docker 容器,以及负责入站流量的 Cloudflare Tunnel——之后,我们的工作重心转向了应用层的配置、长期的运营维护以及 inevitable 的故障排除。3X-UI 本身提供了一个功能丰富的 Web 界面,允许我们对代理协议、端口、用户等进行精细化管理。然而,要使其在我们这个特殊的、由多层技术栈支撑的环境中达到最佳性能和稳定性,仍需进行一些针对性的配置。同时,一个完善的部署方案必须包含有效的监控、日志管理和故障应对策略,以确保服务的长期可用性。本章节将深入探讨 3X-UI 面板的关键配置项,提供一套实用的运维监控指南,并针对在此复杂部署中可能遇到的常见问题,给出具有洞察力的分析和解决方案。3X-UI 面板的针对性配置
一旦整个系统通过 `docker-compose up -d` 成功启动,并且 Cloudflare Tunnel 也正常运行,你就可以通过之前配置的域名(例如 `https://3x-ui.yourdomain.com`)访问 3X-UI 的 Web 管理面板了。首次登录后,进行一些针对性的配置是至关重要的,以确保其与我们的部署环境(无 IPv6、WARP 出站)和选定的端口(2053)相协调。- 入站设置 (Inbound Settings):
- 端口选择: 在创建入站规则时,确保监听端口与我们在
docker-compose.yml中映射的端口一致,即 2053。虽然 3X-UI 本身可以监听任意端口,但只有通过 Docker 映射出来的端口才能从容器外部(包括通过 Cloudflare Tunnel)访问到。选择 2053 这样的非标准端口,有时可以规避一些网络环境中对常见代理端口(如 80, 443, 8080 等)的封锁或干扰。 - 协议选择: 3X-UI 支持多种协议,包括 VLESS、VMess、Trojan、Shadowsocks 等。考虑到我们运行在一个纯 IPv4 环境中,应确保所选择的协议及其配置不依赖 IPv6。VLESS 和 Trojan 是相对较新且性能较好的协议,它们通常能很好地与 WARP 出站配合使用。
- 禁用 IPv6: 在入站规则的详细配置中,如果存在与 IPv6 相关的选项(例如 "listen on IPv6" 或 "allow IPv6 clients"),应确保将其禁用。这可以避免 3X-UI 尝试在不支持 IPv6 的环境中处理 IPv6 流量,从而减少潜在的错误和资源浪费。
- 传输配置 (Transport Settings): 对于需要绕过网络审查或复杂防火墙的场景,可以配置不同的传输层,如 WebSocket (ws)、gRPC、TCP 等。WebSocket 和 gRPC 由于其流量特征与普通 HTTPS 流量相似,通常具有更好的伪装性。如果选择这些传输方式,并且希望通过 Cloudflare Tunnel 来承载这些流量,Cloudflare 通常能很好地代理它们。
- 端口选择: 在创建入站规则时,确保监听端口与我们在
- WARP 集成 (面板内配置):
- 虽然我们已经通过
supervisord在系统级别启动了 Cloudflare WARP,并为所有出站流量提供了代理,但 3X-UI 面板本身可能也提供了一些与 WARP 集成的选项。这些选项可能允许你指定 WARP 的许可证密钥(如果你有付费版本以获得更多功能或带宽),或者在面板内查看 WARP 的连接状态。如果面板提供了此类功能,可以根据需要进行配置。但核心的 WARP 连接是由我们底层的warp-svc和warp-cli保证的,这部分不依赖于面板的配置。
- 虽然我们已经通过
- 出站设置 (Outbound Settings):
- 默认出站: 在 3X-UI 的出站规则配置中,可以设置一个默认的出站代理。由于我们已经通过系统级的 WARP 实现了全局出站代理,这里的"默认出站"可以设置为 "direct"(直连)。因为所有流量在离开容器时,已经由 WARP 接管。
- 高级路由与分流: 3X-UI 的强大之处在于其灵活的路由功能。你可以创建自定义的出站规则,并根据域名、IP、GeoIP 等条件将流量分流到不同的出站代理。例如,你可以配置国内网站直连,而国外网站则通过一个特定的代理(可以是另一个 3X-UI 入站,或者通过 WARP)。在我们的部署中,由于 WARP 已经是全局出站,如果需要更精细的控制(例如,部分流量不经过 WARP),则需要在 Xray 核心的配置中仔细规划出站规则,并可能需要调整 WARP 的配置(例如,使用 WARP 的代理模式而非 VPN 模式,或者配置 split tunneling)。然而,对于大多数用例,全局 WARP 出站已经能提供良好的性能和隐私保护。
运维监控与日志管理
一个健壮的系统离不开有效的监控和日志管理。对于我们的部署方案,监控可以从多个层面进行:- 容器级别监控:
docker ps -a: 查看所有容器的状态,确保3xui_app和cf-tunnel都处于Up状态。docker logs <container_name/id>: 查看容器的标准输出/错误日志。对于3xui_app容器,这将显示supervisord自身的日志以及所有被supervisord管理的服务(如果它们的日志被重定向到 stdout/stderr,如我们的supervisord.conf配置所示)的日志输出。docker-compose logs: 在docker-compose.yml文件所在目录执行此命令,可以查看由docker-compose管理的所有服务的日志。加上-f参数可以实时跟踪日志输出。
supervisord** 内部服务监控**:- 进入
3xui_app容器:docker exec -it 3xui_app bash。 - 在容器内,使用
supervisorctl status命令可以查看由supervisord管理的每个程序(dbus,warp-svc,warp-init,x-ui)的详细运行状态,如RUNNING,STARTING,STOPPED,EXITED,FATAL等。这是诊断容器内部服务问题的核心工具。 supervisorctl tail <program_name>: 可以实时查看特定服务的标准输出日志。例如,supervisorctl tail x-ui。supervisorctl restart <program_name>: 可以单独重启某个服务,而无需重启整个容器。这对于快速恢复某个故障服务非常有用。
- 进入
- Cloudflare WARP 状态检查:
- 在
3xui_app容器内,执行warp-cli --accept-tos status可以查看 WARP 的详细连接状态,包括是否已连接、连接类型(WireGuard/MASQUE)、获取的 IP 地址等信息。
- 在
- Cloudflare Tunnel 状态检查:
- 查看
cf-tunnel容器的日志:docker logs cf-tunnel。 - 在 Cloudflare 的仪表盘中,Zero Trust -> Networks -> Tunnels 页面,可以查看隧道的连接状态、健康检查以及流量统计。
- 查看
- 日志轮转与持久化:
- 如前所述,在
docker-compose.yml中为容器配置了日志轮转选项(如max-size,max-file),可以防止单个日志文件无限增长占用过多磁盘空间。 - 将
supervisord管理的服务的日志(通过卷挂载./supervisor_logs/:/var/log/supervisor/)持久化到主机,是一个极佳的实践。这样即使容器被删除,这些历史日志依然保留,便于进行事后分析或故障排查。
- 如前所述,在
常见问题与深度解决方案
尽管我们进行了精心的设计,但在实际运行中仍可能遇到各种问题。以下是一些常见问题及其可能的解决方案:- 域名绑定/访问问题 (Cloudflare Tunnel 相关):
- 现象: 无法通过配置的域名访问 3X-UI 面板。
- 排查:
- 检查
cf-tunnel容器是否正常运行 (docker ps)。 - 查看
cf-tunnel容器的日志 (docker logs cf-tunnel),寻找连接错误或配置错误信息。 - 确认 Cloudflare DNS 中的 CNAME 记录是否正确指向了隧道。
- 在 Cloudflare 仪表盘的 Tunnel 设置中,检查隧道是否显示为 "Healthy"。
- 确认
config.yml中的hostname和service配置是否正确。service应指向3xui_app容器内的 3X-UI 服务地址。如果两个容器在同一个默认 Docker 网络中,这应该是http://3x-ui-app:2053(使用 Docker 服务名作为主机名)。如果3xui_app使用了network_mode: host,则应该是http://localhost:2053。 - 检查 Cloudflare 的 SSL/TLS 加密模式(在 DNS/SSL/TLS -> Overview 中),如果设置为 "Full (Strict)",则你的本地服务也需要提供有效的 SSL 证书。对于初学者,"Flexible" (Cloudflare 到访问者是 HTTPS,Cloudflare 到你的服务器是 HTTP) 或 "Full" (Cloudflare 到你的服务器是 HTTPS,但不验证证书) 可能更容易配置。
- 检查
- WARP 连接失败:
- 现象:
supervisorctl status显示warp-svc或warp-init为FATAL或EXITED状态;或在容器内执行warp-cli status显示未连接。 - 排查:
- 使用
supervisorctl tail warp-svc和supervisorctl tail warp-init查看详细的错误日志。 - 确认
dbus服务是否正常运行 (supervisorctl status dbus)。WARP 依赖 D-Bus。 - 检查
warp-init脚本的输出,看warp-cli register和warp-cli connect是否成功。 - 尝试在容器内手动重置 WARP:
warp-cli --accept-tos delete然后warp-cli --accept-tos register和warp-cli --accept-tos connect。 - 检查容器的 capabilities 和 security options,确保 WARP 有足够的权限进行网络操作。
- 使用
- 现象:
- 3X-UI 服务异常:
- 现象: 无法访问 3X-UI 面板,或代理连接不工作。
- 排查:
- 使用
supervisorctl status x-ui检查 3X-UI 进程状态。 - 使用
supervisorctl tail x-ui查看 3X-UI 的日志,寻找启动错误或运行时错误。 - 确认端口映射正确,并且没有其他进程占用容器内的 2053 端口。
- 检查 3X-UI 的入站规则配置是否正确,特别是协议、端口和传输设置。
- 如果修改了
supervisord.conf或x-ui的环境变量,需要重启3xui_app容器或使用supervisorctl update和supervisorctl restart x-ui来应用更改。
- 使用
- 容器权限问题:
- 现象: 服务因权限不足而无法启动(例如,"Operation not permitted" 错误)。
- 排查:
- 回顾
docker-compose.yml中的cap_add和security_opt设置。确保授予了必要的 Linux capabilities(如NET_ADMIN,SYS_ADMIN)。 - 如果使用了
seccomp:unconfined,评估其风险,并考虑是否可以移除或使用自定义 seccomp profile。 - 检查
supervisord.conf中各程序的user设置,确保它们有权限访问所需的文件和目录。
- 回顾
- 资源限制与性能问题:
- 现象: 服务响应缓慢,或容器因内存不足而被杀死 (OOMKilled)。
- 排查与解决:
- 在
docker-compose.yml中为服务设置资源限制,如deploy.resources.limits.cpus和deploy.resources.limits.memory。这可以防止单个服务耗尽主机资源。 - 使用
docker stats查看容器的实时 CPU 和内存使用情况。 - 定期检查日志文件大小,确保日志轮转配置生效。
- 考虑 ClawCloud Run 平台自身的资源限制,尤其是在使用免费计划时 [4]。如果资源不足,可能需要考虑升级计划或优化应用配置。
- 在
通过这一系列的配置、监控和故障排除策略,我们不仅确保了 3X-UI 服务在复杂环境下的成功部署,更为其长期的稳定运行提供了坚实的保障。这种系统化的运维思维,是任何生产级部署都不可或缺的组成部分。它要求我们从宏观的架构视角到微观的日志细节,都保持高度的警觉和掌控力,从而在面对挑战时能够迅速定位问题并采取有效的应对措施。
结论:在约束中创新,构建弹性云服务的深度思考
本报告深入剖析了在 ClawCloud Run 这一具有特定限制的云平台上,成功部署并稳定运行 3X-UI 代理服务管理面板的完整技术方案。通过对环境限制的细致分析、Docker 容器化技术的巧妙运用、`supervisord` 进程管理器的深度集成、以及 Cloudflare WARP 与 Cloudflare Tunnel 的协同赋能,我们不仅克服了平台固有的无特权环境、缺乏 IPv6 支持、域名绑定延迟等挑战,更构建了一个集安全性、稳定性与可访问性于一体的综合性解决方案。这一过程远非简单的技术堆砌,而是一次在资源受限条件下,通过系统性思维和创新性组合,实现复杂应用弹性部署的深度实践。核心成就与技术价值
本次部署方案的核心价值体现在以下几个层面:
- 克服环境限制的系统性方法论:面对 ClawCloud Run 的种种约束,我们没有采取回避或妥协的态度,而是将其视为一个优化和创新的契机。通过将问题分解——权限隔离、网络适配、服务暴露——并针对性地引入 Docker、WARP、Tunnel 等技术,我们形成了一套可复制、可推广的、在受限环境中部署复杂应用的系统性方法论。这证明了即使在资源非自由的条件下,通过精心的技术选型和架构设计,依然能够实现功能强大且稳定的服务。
- 容器内多进程管理的深度实践:方案中的一大亮点是使用
supervisord在单一 Docker 容器内管理多个相互依赖的系统级服务(D-Bus、WARP、3X-UI)。这不仅是对 Docker "一个容器一个进程" 最佳实践的有益补充和特定场景下的合理变通,更展示了在需要更高内聚性和复杂依赖管理的场景下,如何利用进程管理器来保证服务启动顺序、生命周期监控和故障恢复。这种精细化的进程编排能力,是提升容器化应用健壮性的关键。 - Cloudflare 生态的协同与增效:通过将 Cloudflare WARP 和 Cloudflare Tunnel 无缝集成到部署流程中,我们不仅解决了特定的网络问题,更赋予了整个系统前所未有的网络弹性。WARP 为出站流量提供了优化和隐私保护,而 Tunnel 则为入站访问提供了安全和便捷。这种对 Cloudflare 生态的深度利用,体现了现代云原生应用构建中,善用第三方专业服务来弥补平台不足、增强自身能力的趋势。
- 安全性与可维护性的平衡:在整个方案的设计中,我们始终关注安全性与可维护性的平衡。例如,在 Docker 容器中通过
cap_add精细授予权限而非盲目使用--privileged;通过supervisord和 Docker 的日志机制实现集中化的日志管理和轮转;通过docker-compose.yml的健康检查确保服务可用性。这些细节共同构成了一个既强大又易于管理的部署单元。
更深层次的洞察与启示
除了具体的技术实现,本案例还带给我们一些更深层次的启示:
- 抽象化与分层的重要性:成功的系统设计往往依赖于良好的抽象和分层。Docker 容器封装了应用及其运行环境,
supervisord封装了容器内的进程管理,Cloudflare Tunnel 封装了网络暴露的复杂性。每一层都为上层提供了一个简化的、可依赖的接口,使得整个系统的复杂性得以有效管理。 - 从"解决问题"到"优化体验"的思维转变:最初的部署目标可能仅仅是"让 3X-UI 在 ClawCloud Run 上跑起来"。但通过引入 WARP 和 Tunnel,我们不仅解决了基本问题,还进一步优化了网络体验(更快、更稳定的出站)和访问体验(更安全、更便捷的入站)。这种从满足基本需求到追求卓越体验的思维转变,是驱动技术方案持续优化的内在动力。
- 对平台限制的再认识:平台的限制,虽然在短期内可能被视为障碍,但从长远看,它们往往能激发出更具创新性和通用性的解决方案。正是 ClawCloud Run 的限制,促使我们探索并实践了这种高度依赖容器化和第三方网络服务的部署模式,而这种模式在许多其他云环境甚至私有服务器上都具有潜在的适用价值。
未来展望与潜在的演进方向
尽管当前的方案已经相当成熟和稳定,但技术的演进永无止境。未来,我们可以从以下几个方面进行探索和优化:
- 更精细的流量路由与分流:目前 WARP 提供了全局出站代理。未来可以探索利用 3X-UI/Xray 内置的强大路由功能,结合 WARP 的配置(如局域网排除、特定域名直连等),实现更精细的流量分流策略,例如,对特定类型的流量使用不同的出站代理或直连,以进一步优化性能和访问体验。
- 自动化部署与 CI/CD 集成:可以将整个部署流程(Docker 镜像构建、
docker-compose启动、Cloudflare Tunnel 配置等)脚本化,并集成到 CI/CD (持续集成/持续部署) 流水线中。这样,当 3X-UI 或其依赖组件有更新时,可以实现一键自动化重新部署和升级,大大提高运维效率。 - 监控告警的完善:目前主要依赖于手动查看日志和状态。未来可以引入更专业的监控告警系统(如 Prometheus + Grafana),对容器的资源使用、服务的健康状态、代理的流量情况等进行实时监控,并在出现异常时自动发送告警通知。
- 安全加固的持续投入:定期审查 Docker 镜像的安全漏洞,更新基础镜像和依赖包;研究更严格的 seccomp 和 AppArmor 配置文件,以在不影响功能的前提下进一步限制容器的系统调用能力;对 3X-UI 的访问进行更严格的认证和授权控制。
- 探索
systemd替代supervisord的可能性:如果基础镜像支持,并且对系统级服务的集成有更高要求,可以考虑使用systemd作为容器内的 init 系统和进程管理器。systemd在现代 Linux 发行版中已成为标准,其对服务的管理能力更为强大和原生。但这同样需要仔细权衡其复杂性和对镜像体积的影响。
总之,在 ClawCloud Run 上部署 3X-UI + Cloudflare WARP + Cloudflare Tunnel 的实践,不仅是一次成功的技术攻坚,更是一次关于如何在约束条件下进行创新性系统构建的宝贵经验。它充分展示了现代云原生技术栈的强大能力,以及通过巧妙组合不同技术组件来解决复杂工程问题的艺术。这套方案不仅为特定用户群体提供了切实可行的部署指南,更为广大开发者和运维人员在面临类似挑战时,提供了富有启发性的思路和借鉴。在快速演进的云计算时代,这种不断学习、适应、并最终驾驭环境限制的能力,将是每一位技术从业者不可或缺的核心素养。
Why Cloudflare WARP? Because I have recently been working on https://github.com/masx200/warp-on-actions,
so I simply reused Cloudflare WARP here to work around the lack of IPv6 for outbound traffic.
Since the Kubernetes environment has no permission to create virtual network interfaces, WARP has to be set to proxy mode:
warp-cli --accept-tos --verbose mode proxy
I also found that ClawCloud Run's domain binding takes far too long and stays stuck in pending; many people have been reporting this issue recently, possibly due to server overload, so I had to fall back to Cloudflare Tunnel.
https://linux.do/t/topic/641357/17
Is it true that traffic going through Cloudflare Tunnel doesn't count against ClawCloud Run's traffic limit?
image: docker.cnb.cool/masx200/docker_mirror/ubuntu-3x-ui-warp:2025-12-10-11-22-49
- CPU: 0.2 Core, Memory: 1 GB
- Local Storage volumes: /etc/x-ui, /etc/letsencrypt, /root/cert, /var/log
- Estimated Cost (Day): CPU $0.03, Memory $0.07, Storage $0.02, NodePorts $0.00, Total $0.12

image: cloudflare/cloudflared:latest
- CPU Limit: 0.2 Core, Memory Limit: 128 Mi
- Estimated Cost (Day): CPU $0.03, Memory $0.01, Storage $0.00, NodePorts $0.00, Total $0.04
Why not ask an LLM? Why use 3X-UI? Why not just use WireGuard for outbound traffic? ~~Don't bother posting AI-generated research papers~~
I've noticed that domain binding for ClawCloud Run takes far too long and remains stuck in pending status. Many people are reporting issues with this recently. It might be due to server overload. So I had to resort to using Cloudflare Tunnel as a workaround.
https://linux.do/t/topic/641357/17
Is it true that using Cloudflare Tunnel doesn't count against ClawCloud Run's traffic limit?
Why use Cloudflare WARP? Because I've been researching https://github.com/masx200/warp-on-actions, so I just used Cloudflare WARP here as well, to solve the lack of IPv6 for outbound traffic.
Since the Kubernetes environment lacks permissions to create virtual network interfaces, WARP must be set to proxy mode:
warp-cli --accept-tos --verbose mode proxy
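One way to check that proxy-mode WARP is actually carrying traffic, assuming the default local SOCKS5 port 40000 that warp-cli exposes in proxy mode:

```bash
# Confirm registration and connection state
warp-cli --accept-tos status

# Send a request through the local SOCKS5 proxy that proxy mode provides;
# the trace output should include "warp=on" when egress goes via WARP
curl --proxy socks5h://127.0.0.1:40000 https://www.cloudflare.com/cdn-cgi/trace
```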
Server-side:
3x-ui is quite beginner-friendly, and there are plenty of YouTube videos covering it. https://www.youtube.com/watch?v=WffyIuCy3Xw
Client-side:
Another major advantage of 3x-ui is its support for ECH, which completely solves the domain-blocking problem!! Woohoo, take off!! You can even install 3x-ui directly on a soft router. 🚀🚀🚀🚀🚀
{
"outbounds": [
{
"tag": "worker-1",
"protocol": "vless",
"settings": {
"vnext": [
{
"address": "###########", // fill in preferred CF IP
"port": 443,
"users": [
{
"id": "###########", // fill in your UUID
"encryption": "none"
}
]
}
]
},
"streamSettings": {
"network": "ws",
"wsSettings": {
"host": "###########.workers.dev", // workers domain supports ECH, so you can put the workers domain here
"path": "/?ed=2560"
},
"security": "tls",
"tlsSettings": {
"serverName": "###########.workers.dev", // workers domain supports ECH, so you can put the workers domain here
"allowInsecure": false,
"echConfigList": "gitlab.io+https://223.5.5.5/dns-query", // echConfigList should point to a domain that can fetch CF's ECH config + a DNS that can resolve it successfully
"echForceQuery": "full",
"fingerprint": "chrome"
}
}
}
]
}
Translation of #553 (comment) (abridged because of comment length limits)
Building Resilient Proxy Services in Restricted Cloud Environments: Deep Integration Practices with 3X-UI, Cloudflare WARP, and Tunnel
Introduction: Navigating the Complexity of Cloud-Native Deployments
The Docker image is already prepared.
During deployment, mount supervisord.conf to /etc/supervisor/conf.d/supervisord.conf via a ConfigMap.
Deployment Parameters:
Command: Enter "bash -c"
Arguments: Enter "rm -frv /run/dbus/pid && mkdir -pv /var/log/supervisor/ && /supervisord/supervisord_static -c /etc/supervisor/conf.d/supervisord.conf"
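For anyone reproducing this outside the ClawCloud Run console, roughly the same ConfigMap can be prepared with plain kubectl; the ConfigMap name supervisord-conf below is illustrative, not something the platform requires:

```bash
# Create a ConfigMap whose single key holds the supervisord.conf shown later in this post
kubectl create configmap supervisord-conf \
  --from-file=supervisord.conf=./supervisord.conf

# Inspect what will be mounted at /etc/supervisor/conf.d/supervisord.conf
kubectl get configmap supervisord-conf -o yaml
```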
docker.cnb.cool/masx200/docker_mirror/ubuntu-3x-ui-warp:2025-12-10-11-22-49
cloudflare/cloudflared:latest
In today's highly interconnected digital era, the demand for secure, efficient, and flexible network proxy and tunneling services continues to grow. Whether to safeguard data transmission privacy, bypass geographic restrictions, or construct complex network architectures, developers and system administrators are constantly seeking superior solutions. 3X-UI, as a powerful Xray core management panel, has gained popularity among users for its extensive support of multiple proxy protocols and user-friendly web interface [10] [11]. It enables users to effortlessly configure and manage VPNs and proxy servers through a streamlined interface, supporting mainstream protocols like VLESS, VMess, Trojan, and Shadowsocks, while offering advanced features such as traffic statistics, user management, and node subscriptions [12] . However, deploying such a powerful tool into specific cloud environments, especially those with numerous restrictions, often requires careful planning and clever workarounds. ClawCloud Run (run.claw.cloud) is such a platform. It offers a high-performance, lightweight cloud-native deployment environment similar to Kubernetes, integrated with GitOps workflows and native support for Docker/Kubernetes [1] . While it promises an uptime SLA of up to 99.975% and exceptional data reliability [2], inherent limitations in its free plan or certain configurations pose challenges for directly deploying complex applications like 3X-UI. These limitations primarily include: lack of privileged operation permissions (unable to create virtual network devices like TUN/TAP), no support for IPv6 networks, and potential prolonged unresponsiveness or latency issues with domain name binding services. If not properly addressed, these obstacles will directly impede the normal operation of 3X-UI and the execution of its core functions. This report aims to provide an in-depth analysis of how 3X-UI services can be successfully deployed and stably operated on constrained cloud platforms like ClawCloud Run through the integrated use of Docker containerization technology, Cloudflare WARP, and Cloudflare Tunnel. We will not only deliver an operational guide but also reveal the underlying design philosophy, technical selection considerations, and systems engineering principles demonstrated in overcoming environmental limitations. The report will detail the nature of environmental constraints and their impact on deployment strategies, explore how containerization serves as an effective isolation and permission circumvention mechanism, analyze how Cloudflare WARP provides stable outbound connectivity in restricted networks, and explain how Cloudflare Tunnel cleverly bypasses platform domain binding limitations. Furthermore, this report will focus on the complexity and stability of service management. It will delve into the advantages and practical details of using supervisord as a process manager to coordinate multiple dependent services within containers (such as D-Bus, WARP services, and 3X-UI itself). Through a layer-by-layer deconstruction of the entire deployment architecture and a meticulous interpretation of key configurations, we aim to present readers with a complete blueprint for building a highly available, highly resilient proxy service under resource-constrained conditions, offering practitioners facing similar challenges a reference with deep insights. This is not merely a technical tutorial but a deep case study in problem-solving, architectural design, and system optimization.
Breaking Free from Environmental Shackles: In-Depth Analysis of ClawCloud Run Platform Limitations and Mitigation Strategies
In any complex deployment task, the foremost and critical step is to thoroughly assess and understand the characteristics and limitations of the target runtime environment. ClawCloud Run, a platform designed to abstract Kubernetes complexity and deliver a simplified cloud-native application deployment experience [8], introduces certain constraints that must be acknowledged alongside its convenience. These constraints, particularly for applications requiring deep network configuration and system privileges (such as proxy server management panels like 3X-UI), constitute major obstacles during deployment. This section will delve into these core limitations and detail the countermeasures we designed to overcome them, collectively forming the foundation for the subsequent deployment strategy. First, the unprivileged environment stands as one of the most defining characteristics of platforms like ClawCloud Run. For security and multi-tenant isolation, these platforms typically prevent containerized processes from obtaining root privileges on the host machine or executing specific privileged operations. For 3X-UI, its dependent Xray core may require creating TUN/TAP virtual network devices under certain configurations or protocols to enable packet forwarding and processing. Direct creation of such devices is typically prohibited in unprivileged environments, leading to Xray core failure or functionality limitations. To address this challenge, Docker containerization emerged as our primary solution. By meticulously building a Docker image, we can pack 3X-UI and all its dependencies into a self-contained, portable unit. Although the container itself may run in non-privileged mode, Docker's layered filesystem and process isolation mechanisms provide the application with a relatively independent runtime environment. Furthermore, by correctly configuring users and permissions in the Dockerfile and adhering to Docker security best practices—such as using the USER instruction to switch to a non-root user for application execution [34] and leveraging Linux capabilities for granular permission control [35], we can create a functional runtime environment for 3X-UI without violating platform security policies. In certain scenarios, if platform policies permit, specific Linux capabilities (e.g., NET_ADMIN) may be added to containers to support essential network operations. This approach is generally more secure than directly using the --privileged flag, which should be avoided whenever possible due to its excessive granting of privileges and increased security risks. [36] [37].
Second, lack of IPv6 support is another explicit limitation of the ClawCloud Run platform. Although global IPv6 deployment is increasingly widespread, support for IPv6 may be incomplete or entirely absent in certain cloud environments, particularly simplified platforms or those targeting specific user groups. 3X-UI and its underlying Xray core are designed to support IPv6. If improperly configured, they may attempt to listen on IPv6 addresses or perform IPv6 routing, leading to unnecessary errors or functionality issues in non-IPv6 environments. Therefore, our mitigation strategy is to explicitly disable IPv6 functionality in all relevant configurations. This includes: Ensuring inbound and outbound rules in 3X-UI's panel settings exclude IPv6-related options; Removing or commenting out IPv6 listening and routing entries in Xray core configuration files; Disabling IPv6 via system-level controls (e.g., sysctls) in Docker container startup parameters or docker-compose.yml files. For example, adding sysctls: - net.ipv6.conf.all.disable_ipv6=1 to the service definition in docker-compose.yml ensures IPv6 is completely disabled within the container. This proactive approach avoids potential compatibility issues and guarantees stable service operation in pure IPv4 environments. This approach not only adapts to platform limitations but also serves as an effective means to simplify network configuration and reduce uncertainty in specific environments.
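As a local illustration with plain Docker (the same sysctls entry described above for docker-compose, applied via the docker run flag and using this post's image name), disabling and verifying IPv6 looks like this:

```bash
# Start the container with IPv6 disabled inside its network namespace
docker run -d --name 3xui_app \
  --sysctl net.ipv6.conf.all.disable_ipv6=1 \
  docker.cnb.cool/masx200/docker_mirror/ubuntu-3x-ui-warp:2025-12-10-11-22-49

# A value of "1" confirms the kernel reports IPv6 as disabled for this container
docker exec 3xui_app cat /proc/sys/net/ipv6/conf/all/disable_ipv6
```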
Third, domain name binding delays or unresponsiveness are issues frequently reported by users on ClawCloud Run. Traditional deployment methods typically require configuring DNS records on the cloud platform to point the domain name to the server's public IP address. This may also necessitate setting up a web server (like Nginx) on the server side for reverse proxy. If the platform's domain binding service experiences delays or failures, it directly impacts service accessibility and deployment efficiency. To bypass this potential bottleneck, we introduced Cloudflare Tunnel.
Cloudflare Tunnel is a powerful reverse proxy technology that enables users to securely expose services from local or private networks to the internet without opening inbound ports on firewalls or configuring public IP addresses [24]. It works by running a lightweight cloudflared client locally, which establishes an outbound, egress-only encrypted tunnel to Cloudflare's edge network. Traffic for specified domains is then routed to this tunnel via Cloudflare's DNS control panel. Consequently, all requests to these domains first traverse Cloudflare's global network before being securely forwarded through the tunnel to the 3X-UI service running on ClawCloud Run. This approach not only completely bypasses ClawCloud Run's own domain binding mechanism but also provides additional security benefits such as DDoS protection and SSL/TLS termination—value-added services offered by the Cloudflare network.
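Condensed into commands, the tunnel setup is roughly the following sketch; the tunnel name 3x-ui-tunnel and hostname 3x-ui.yourdomain.com are placeholders:

```bash
cloudflared tunnel login                         # authorize cloudflared against your Cloudflare account
cloudflared tunnel create 3x-ui-tunnel           # generates a tunnel UUID plus a credentials JSON file
cloudflared tunnel route dns 3x-ui-tunnel 3x-ui.yourdomain.com   # create the CNAME pointing at the tunnel
cloudflared tunnel --config ~/.cloudflared/config.yml run 3x-ui-tunnel   # start the connector
```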
Finally, network restrictions and outbound connection stability are critical considerations when deploying services on cloud platforms. Certain platforms may impose limits on outbound traffic or experience unstable/slow connections due to complex network paths. This is especially vital for proxy services requiring active connections to external networks. To ensure the 3X-UI service maintains a stable and reliable outbound connection, we integrated Cloudflare WARP. Cloudflare WARP is a lightweight client that securely routes device traffic through Cloudflare's global network [26] using WireGuard or MASQUE protocols. By deploying and connecting WARP within the container running 3X-UI, we route all outbound traffic—including traffic forwarded by the proxy server to target services—through Cloudflare's optimized network. This not only improves connection quality and access speeds but also enhances traffic privacy and anti-blocking capabilities to a certain extent. WARP clients on Linux systems typically rely on the D-Bus messaging bus for communication and configuration management. This introduces additional complexity when deploying WARP in containerized environments, as containers may not include or correctly configure D-Bus services by default. This challenge will be addressed in detail later when discussing the use of supervisord to manage multi-process containers.
In summary, the limitations of the ClawCloud Run platform—unprivileged environment, lack of IPv6 support, domain binding issues, and potential network instability—collectively create a suboptimal deployment scenario. However, by combining Docker containerization, explicit IPv6 disabling, Cloudflare Tunnel, and Cloudflare WARP, we have constructed a multi-layered, synergistic solution. This approach not only effectively overcomes each individual constraint but also sees components complementing one another to collectively enhance the system's robustness, security, and accessibility. This transformation of challenges into opportunities for adopting more advanced and flexible technology combinations embodies the essence of modern cloud-native engineering practices. Subsequent sections will detail how to translate these strategies into concrete, actionable deployment steps and configurations.
Building a Solid Foundation: Docker Containerization Deployment and Service Management Optimization
Before delving into specific deployment details, we must first establish a core construction principle: in any cloud environment, especially platforms with numerous constraints like ClawCloud Run, a well-designed containerization strategy is crucial for ensuring successful application deployment and stable operation. Docker containerization not only provides encapsulation and isolation for applications but, more importantly, offers a powerful tool for addressing platform limitations, standardizing deployment processes, and effectively managing complex dependencies. This chapter focuses on building a robust, reliable, and maintainable operational foundation for 3X-UI, Cloudflare WARP, and subsequent Cloudflare Tunnel integration. This is achieved by optimizing Docker image builds, introducing supervisord as the process manager, and meticulously designing docker-compose.yml configurations.
We'll begin with the design philosophy of Dockerfiles, gradually transitioning to the complex topic of efficiently managing multiple collaborative services within a single container—one of the core technical highlights of this deployment solution.
Evolution of Dockerfiles: From Basics to Integration
The initial Dockerfile design, as provided by the Thinking Assistant in its first iteration, was based on Alpine Linux. Renowned for its compact size and security, Alpine is a popular choice for building lightweight images. This version included steps necessary for building 3X-UI, such as installing the Go language environment, compiling source code, copying executables, and attempting to install Cloudflare WARP within the Alpine environment. However, Alpine Linux uses musl libc as its standard C library, which may exhibit differences in binary compatibility compared to the more common glibc. More importantly, the Cloudflare WARP client (particularly the warp-svc service) may have stronger dependencies on glibc in certain scenarios, and its officially provided installation packages primarily target glibc-based distributions like Debian/Ubuntu. Consequently, installing and running WARP on Alpine may present additional challenges, such as requiring manual dependency resolution or certain features failing to function correctly. Furthermore, the original Dockerfile uses CMD ["/bin/sh", "-c", "warp-cli --accept-tos register && warp-cli --accept-tos connect && ./x-ui"] to start the service. While simple, this approach has significant drawbacks: it chains WARP's registration and connection with 3X-UI's startup into a single shell command. If the warp-cli connect command blocks or fails, or if managing multiple independent background services becomes necessary later, this simple shell command chain proves inadequate, making fine-grained control and fault recovery difficult.
Given these considerations, a more optimized approach, as proposed in subsequent improvements by Thinking Assistant, is to adopt Ubuntu 22.04 as the base image. Ubuntu boasts extensive software repositories and community support, with its glibc environment offering superior compatibility with most mainstream software (including Cloudflare WARP). Using Ubuntu significantly simplifies the WARP installation process—typically requiring only adding the software source per Cloudflare's official documentation and executing apt-get install—greatly reducing deployment failures due to environment issues. While migrating from Alpine to Ubuntu increases the final image size, compatibility and deployment convenience often outweigh extreme image compression in scenarios requiring multiple complex system-level services. This trade-off demonstrates that in practice, the "best" choice is determined by specific application contexts and requirements, rather than blindly pursuing a single metric.
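As a sketch of those Dockerfile steps on Ubuntu 22.04, based on Cloudflare's published apt repository at pkg.cloudflareclient.com (worth re-checking against the current WARP Linux install docs; the keyring path is a common convention rather than something this post mandates):

```bash
# Ubuntu 22.04: add Cloudflare's WARP apt repository and install the client
apt-get update && apt-get install -y curl gpg lsb-release
curl -fsSL https://pkg.cloudflareclient.com/pubkey.gpg \
  | gpg --yes --dearmor --output /usr/share/keyrings/cloudflare-warp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/cloudflare-warp-archive-keyring.gpg] https://pkg.cloudflareclient.com/ $(lsb_release -cs) main" \
  > /etc/apt/sources.list.d/cloudflare-client.list
apt-get update && apt-get install -y cloudflare-warp
```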
The Agony of Process Management: The Introduction and Value of supervisord
In containerized deployment practices, a widely advocated best practice is "one container, one main process" [40] [42]. This approach helps maintain container simplicity and leverages Docker's built-in process management mechanisms (such as automatic container restart) to ensure service availability. However, this best practice faces challenges when we need to run multiple interdependent or independently managed background services within a single container (e.g., D-Bus, Cloudflare WARP service warp-svc, and 3X-UI itself in this solution). Simply launching multiple processes in the background via shell scripts makes it difficult to effectively monitor their status, handle unexpected process exits, and manage their startup order and dependencies. This is precisely where a process manager shines. supervisord is a Python-based client/server system that enables users to monitor and control multiple processes on UNIX-like operating systems [44]. It provides a unified mechanism for starting, stopping, and restarting processes, and can be configured for automatic process restart, log rotation, and more, making it ideal for managing multiple services within Docker containers. [47]. In our deployment strategy, supervisord plays a crucial role by resolving the challenge of coordinating multiple complex services within a single container, ensuring overall system stability and maintainability.
Fine-Tuning supervisord.conf: Service Orchestration and Dependency Management
The heart of supervisord lies in its configuration file (typically supervisord.conf), which defines all programs requiring management and their behaviors. A meticulously crafted supervisord.conf clearly maps service dependencies and startup sequences, ensuring the system initializes as intended. Below is an example supervisord.conf tailored to our scenario, accompanied by an in-depth analysis of its key configurations:
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0700
[supervisord]
logfile=/var/log/supervisor/supervisord.log,/dev/stdout
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/var/run/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200
user=root
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
# Priority 100: start dbus first (it is a dependency of WARP)
[program:dbus]
command=/usr/bin/dbus-daemon --config-file=/usr/share/dbus-1/system.conf --system
autostart=true
autorestart=true
startsecs=0
startretries=0
stdout_logfile=/var/log/supervisor/dbus.log,/dev/stdout
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stderr_logfile=/var/log/supervisor/dbus.err.log,/dev/stderr
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
priority=100
# Priority 200: then start the WARP service
[program:warp-svc]
command=/bin/warp-svc --accept-tos
autostart=true
autorestart=true
startsecs=5
startretries=10
stdout_logfile=/var/log/supervisor/warp-svc.log,/dev/stdout
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stderr_logfile=/var/log/supervisor/warp-svc.err.log,/dev/stderr
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
priority=200
environment=HOME="/root",USER="root"
# Priority 250: WARP initialization (one-shot task)
[program:warp-init]
command=/app/init-warp.sh
autostart=true
autorestart=false
startsecs=10
startretries=3
stdout_logfile=/var/log/supervisor/warp-init.log,/dev/stdout
stderr_logfile=/var/log/supervisor/warp-init.err.log,/dev/stderr
priority=250
depends_on=dbus,warp-svc
# Priority 300: start x-ui last (after its dependencies are ready)
[program:x-ui]
command=/x-ui/x-ui-linux-amd64/x-ui/x-ui
environment=XRAY_VMESS_AEAD_FORCED="false",XUI_ENABLE_FAIL2BAN="false"
directory=/x-ui/x-ui-linux-amd64/x-ui
autostart=true
autorestart=true
startsecs=5
startretries=10
stdout_logfile=/var/log/supervisor/x-ui.log,/dev/stdout
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stderr_logfile=/var/log/supervisor/x-ui.err.log,/dev/stderr
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
priority=300
Configuration Deep Dive:
`[supervisord]` section:
- `logfile=/var/log/supervisor/supervisord.log,/dev/stdout`: Writes supervisord's own logs to both a file and standard output. Writing to standard output is crucial for Docker containers, as it allows Docker log drivers (like `json-file`) to capture and manage these logs, making them viewable via the `docker logs` command.
- `nodaemon=false`: This is a critical setting. It instructs supervisord to run as a daemon. In Docker containers, we typically run supervisord as the container's main process (PID 1). Setting it to `true` (non-daemon mode) runs supervisord in the foreground, which also works, but daemon mode aligns better with its intended design. As long as supervisord is the last command executed when the container starts, it can properly manage child processes and receive signals.
- `user=root`: Specifies that supervisord and the programs it manages run as the root user by default. In our scenario, root privileges are necessary to start system services like dbus and potentially WARP, which may require network access. If security policies permit and the application supports it, consider using the `user` directive in specific program configurations to run with reduced privileges.
`[program:dbus]` section:
- `command=/usr/bin/dbus-daemon --config-file=/usr/share/dbus-1/system.conf --system`: Starts the D-Bus system daemon. The Cloudflare WARP client (`warp-svc`) relies on D-Bus for inter-process communication and configuration management, so D-Bus must start and run successfully before WARP.
- `priority=100`: supervisord launches programs in priority order (lower numbers indicate higher priority). Setting D-Bus's priority to the lowest value of 100 ensures it is the first critical service to start.
- `startsecs=0`, `startretries=0`: `startsecs` specifies how many seconds a process must remain running after startup to be considered successfully launched. For a system service like D-Bus, this can be set to 0 if the startup command itself is non-blocking. `startretries=0` means no retries will occur if startup fails, as a D-Bus failure typically indicates a fundamental system environment issue where retries are futile.
`[program:warp-svc]` section:
- `command=/bin/warp-svc --accept-tos`: Starts the Cloudflare WARP background service. The `--accept-tos` parameter automatically accepts the terms of service, which is necessary for automated deployments.
- `priority=200`: Sets a priority higher than D-Bus, ensuring WARP starts after D-Bus has launched.
- `startsecs=5`: Allows the WARP service 5 seconds to start. If the process is still running after 5 seconds, startup is considered successful.
- `startretries=10`: Retries startup up to 10 times if it fails, increasing service resilience.
- `environment=HOME="/root",USER="root"`: Sets necessary environment variables for the `warp-svc` process. Some WARP versions may require these variables to correctly locate configuration files or data directories.
`[program:warp-init]` section (WARP initialization script): This is a one-time task that performs WARP registration and connection after `warp-svc` starts. `command=/app/init-warp.sh` points to a custom shell script, whose contents might look like this:
#!/bin/bash
echo "Initializing Cloudflare WARP..."
# Waiting for dbus and warp-svc to fully start
sleep 5
# Register for WARP
warp-cli --accept-tos registration new
# Connect to WARP
warp-cli --accept-tos connect
# Check the connection status.
for i in {1..30}; do
if warp-cli --accept-tos status | grep -q "Connected"; then
echo "WARP connection established!"
exit 0
fi
echo "Waiting for WARP connection... ($i/30)"
sleep 2
done
echo "WARP connection timed out or failed!"
exit 1
- `autostart=true`, `autorestart=false`: `autostart` ensures the script executes. `autorestart=false` is crucial because this is a one-time initialization script that should not be automatically restarted by supervisord upon success or failure.
- `priority=250`: Executes after `warp-svc`.
- `depends_on=dbus,warp-svc`: A powerful feature of supervisord. It explicitly specifies that the `warp-init` program depends on the RUNNING state of the `dbus` and `warp-svc` programs. `warp-init` will only execute after both dependent services have successfully started. This ensures initialization occurs at the correct time.
- `[program:x-ui]` section:
- `command=/app/x-ui`: Launches the 3X-UI main program.
- `environment=XRAY_VMESS_AEAD_FORCED="false",XUI_ENABLE_FAIL2BAN="false"`: Sets environment variables for 3X-UI. `XUI_ENABLE_FAIL2BAN="false"` is useful in restricted environments, because fail2ban may need additional system privileges and configuration.
- `priority=300`: The largest priority number, so 3X-UI starts last, after all of its dependencies (D-Bus, the WARP service, WARP initialization) are ready.
- `startsecs=5`, `startretries=10`: The same approach as for the WARP service, giving 3X-UI enough startup time and retry attempts. Both of these sections are sketched below.
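The two remaining program sections, as described above, might be sketched like this (the `depends_on` line reflects the description; see the note above on which supervisord builds support it):

[program:warp-init]
command=/app/init-warp.sh
priority=250
autostart=true
autorestart=false
startsecs=0                          ; illustrative: a one-shot script that is allowed to exit
depends_on=dbus,warp-svc             ; see the note above on this directive

[program:x-ui]
command=/app/x-ui
priority=300
startsecs=5
startretries=10
environment=XRAY_VMESS_AEAD_FORCED="false",XUI_ENABLE_FAIL2BAN="false"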
Through this carefully crafted supervisord.conf, we not only run multiple complex services within a single container but, more importantly, establish a clear startup sequence and dependency graph. supervisord launches programs in ascending priority order and, with the dependency gating described above, a program starts only once the services it depends on are RUNNING. This mechanism significantly improves the reliability and predictability of system startup: if a service such as D-Bus fails to start, supervisord logs the error and the services that depend on it (WARP, 3X-UI) are not launched, so the system never ends up in an indeterminate, partially functional state.
Synergy with docker-compose.yml
While supervisord handles process management inside the container, the docker-compose.yml file defines, at a higher level, container behavior, resource limits, network configuration, and communication between containers. Below is an example docker-compose.yml for this deployment, with the 3X-UI stack in one container and Cloudflare Tunnel as a separate companion container:
version: '3.8'
services:
3x-ui-app:
build:
context: .
dockerfile: Dockerfile # Point to the Dockerfile containing supervisord
container_name: 3xui_app
volumes:
- ./db/:/etc/x-ui/ # Persist the 3X-UI Database and Configuration
- ./cert/:/root/cert/ # Persist the SSL certificate (if self-signed or for specific requirements)
- ./supervisor_logs/:/var/log/supervisor/ # Optional: Persist logs for services managed by supervisord
environment:
- TZ=Asia/Shanghai
# XRAY_VMESS_AEAD_FORCED and XUI_ENABLE_FAIL2BAN can also be configured here, or within the "x-ui program" section of supervisord.conf.
ports:
- "2053:2053" # Map the host's port 2053 to the container's port 2053 (3X-UI Web Panel)
cap_add:
- NET_ADMIN # Certain network operations may be required for the Xray core
- SYS_ADMIN # WARP or certain system-level operations may require
security_opt:
- seccomp:unconfined # May be required under certain strict security configurations, but risks should be carefully evaluated
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:2053"] # Check whether the 3X-UI web panel is accessible
interval: 30s
timeout: 10s
retries: 3
start_period: 60s # Allow sufficient time for supervisord and all services to start
# Cloudflare Tunnel can run as a separate standalone service or be managed on the host via systemd or similar tools.
# If running as a container and needing to communicate with the 3x-ui-app container, Docker networking must be configured.
# This example uses a standalone container, assuming tunnel accesses 3x-ui via localhost (requires host network or other configuration).
cloudflare-tunnel:
image: cloudflare/cloudflared:latest
container_name: cf-tunnel
command: tunnel --config /etc/cloudflared/config.yml run
volumes:
- ./cloudflared/:/etc/cloudflared/ # Includes tunnel-id.json and config.yml
restart: unless-stopped
# depends_on:
# - 3x-ui-app # If the tunnel needs to wait for 3x-ui to fully start, you can use depends_on
# However, cloudflared itself has a reconnection mechanism, so it is not essential
Key Configuration Breakdown for docker-compose.yml
- `build`: Specifies the build context and the path of the Dockerfile for the image.
- `volumes`: The core of data persistence.
- `./db/:/etc/x-ui/`: Mounts 3X-UI's configuration files and database (normally located under `/etc/x-ui/`) to the host's `./db` directory, so all settings and user data survive even if the container is destroyed and rebuilt.
- `./cert/:/root/cert/`: If custom SSL certificates are required (e.g. for HTTPS on the 3X-UI web panel or for proxy protocols that need certificates), mount them here.
- `./supervisor_logs/:/var/log/supervisor/`: Optional but highly recommended for debugging; persists the logs of supervisord and all the services it manages to the host, making detailed analysis possible after container issues.
- `ports`: Exposes container ports on the host. We use port 2053, a relatively uncommon port that can sometimes avoid restrictions or interference aimed at common ports in basic network environments.
- `cap_add`: Adds Linux capabilities to the container.
- `NET_ADMIN`: Allows network administration tasks such as configuring firewall rules or creating network tunnels; the Xray core may need this capability under certain configurations.
- `SYS_ADMIN`: Permits a broad range of system administration operations; Cloudflare WARP may need it to install and run correctly, especially when modifying system network configuration or interacting with kernel modules. Adding individual capabilities is safer than `--privileged`, but still grant only what is actually needed.
- `security_opt`: Security options.
- `seccomp:unconfined`: Disables the default seccomp filter. Seccomp (secure computing mode) is a Linux kernel feature that restricts the system calls a process may make. In some scenarios WARP or Xray may need system calls that the default seccomp profile blocks; `unconfined` removes those restrictions but reduces security. Treat this as a last resort, evaluate the risk carefully, and where possible create a custom, more permissive seccomp profile instead.
- `restart: unless-stopped`: Automatically restarts the container after it exits or the host restarts, unless it was stopped manually.
- `healthcheck`: Defines the container's health check. Docker periodically runs the command specified in `test`; if it fails `retries` times in a row, the container is marked `unhealthy`. This is very useful for monitoring the actual availability of the service.
- `start_period: 60s`: Health-check failures during the first 60 seconds after startup do not count toward the retry limit. This gives supervisord and everything it manages (D-Bus, WARP, 3X-UI) enough time to come up, preventing false positives during lengthy initialization.
We have now established a solid foundation. Using an Ubuntu-based Docker image integrated with supervisord and a carefully configured docker-compose.yml file, we have successfully encapsulated 3X-UI, Cloudflare WARP, and its dependent D-Bus services into a unified management unit. supervisord ensures the correct startup sequence, dependency management, and lifecycle control of these services within the container. This self-contained, highly automated unit paves the way for the next phase: integrating Cloudflare Tunnel to achieve a complete, publicly accessible proxy service. This attention to detail and deep tool integration are crucial for tackling complex deployment challenges and ensuring long-term stable operation.
Cutting Through the Fog: Synergistic Empowerment of Cloudflare WARP and Tunnel
After successfully building an internally stable Docker container integrating 3X-UI and Cloudflare WARP, the next critical step is securely and reliably exposing this internal service to the public internet while ensuring its outbound traffic is equally optimized and protected. This is precisely where two other powerful tools from the Cloudflare family—Cloudflare WARP and Cloudflare Tunnel—work synergistically. Together, they form the "network layer" of our solution, handling all inbound and outbound traffic to provide robust networking capabilities and security for the 3X-UI service running within the restricted ClawCloud Run environment. Cloudflare WARP primarily optimizes and secures outbound connections, ensuring containers reliably access external resources. Cloudflare Tunnel, meanwhile, elegantly solves inbound access challenges by bypassing the platform's own domain binding restrictions and adding an extra layer of security for the service. Understanding how these two operate independently and complement each other is crucial to grasping the essence of the entire deployment strategy.
Cloudflare WARP: Building Robust Outbound Connections
Cloudflare WARP is essentially a lightweight VPN client designed for personal devices. It enhances user privacy, security, and performance by routing traffic through secure WireGuard or MASQUE tunnels to Cloudflare's global edge network [26]. In our deployment, WARP assumes a new role: serving as the outbound gateway for the 3X-UI proxy service running within ClawCloud Run containers. This means all client traffic forwarded by 3X-UI passes through Cloudflare WARP before leaving the ClawCloud Run network and reaching its final destination. This design delivers multiple significant advantages. First, enhanced connection stability and performance. As a cloud platform, ClawCloud Run's outbound network quality can be influenced by factors like routing optimization and international bandwidth constraints. By routing traffic through WARP, it leverages Cloudflare's highly optimized global backbone network. This typically translates to lower latency, higher throughput, and more stable connections—particularly noticeable when accessing international resources. Second, enhanced privacy and anti-blocking capabilities. WARP's outbound traffic originates from Cloudflare's IP address pool, partially obscuring the true source IP of ClawCloud Run servers and providing an additional layer of privacy protection for the proxy itself. Simultaneously, due to Cloudflare's network scale and reputation, its IP addresses face a relatively lower risk of being blocked, thereby improving proxy service availability. Third, simplified network configuration. In complex network environments, manual proxy or routing rules may be required to ensure outgoing traffic flows correctly. WARP automates this by creating a virtual network interface at the operating system level and automatically configuring routing. This directs all outgoing traffic (or specific traffic) through its tunnel by default, simplifying network management within containers.
However, integrating Cloudflare WARP into Docker containers is not without challenges. As mentioned earlier, the WARP client (particularly the warp-svc background service) typically relies on D-Bus (Desktop Bus) for system-level communication, state management, and policy configuration on Linux systems. D-Bus is a message bus system that enables applications to communicate and exchange information. In standard Linux distributions, D-Bus usually runs automatically as a system service. However, in a minimal Docker container, the D-Bus service is absent by default. Therefore, to successfully run WARP, we must first start a D-Bus daemon within the container. This is precisely what we achieved in the previous section by configuring [program:dbus] within supervisord. supervisord ensures that D-Bus (dbus-daemon --system) starts before the WARP service (warp-svc), thereby satisfying WARP's runtime dependency. After warp-svc successfully starts, the warp-cli register and warp-cli connect commands must be executed to activate the WARP connection. These operations typically need to be performed only once during initial setup or reset. We handle these one-time tasks through a dedicated warp-init script and its corresponding [program:warp-init] configuration, using depends_on to ensure execution after warp-svc succeeds. This meticulously orchestrated multi-process startup sequence via supervisord is key to overcoming WARP deployment complexities in containerized environments. Once the WARP connection succeeds, all outbound IP traffic from the container (except traffic destined for Cloudflare edges to maintain the WARP tunnel itself) is routed through the WARP tunnel. This means that when the Xray core within 3X-UI forwards user data, its source IP address will be one of Cloudflare's egress IPs, not the IP of the ClawCloud Run server. This transparent proxy behavior allows 3X-UI to benefit from WARP without requiring special outbound configuration. Of course, if finer-grained traffic control is needed (e.g., routing some traffic through WARP while others connect directly), corresponding configurations must be set in Xray's outbound rules.
Cloudflare Tunnel: Securely and Seamlessly Expose Inbound Services
After optimizing outbound connections, our next challenge is enabling users to access the 3X-UI web management panel running within ClawCloud Run containers from the public internet. This also applies to any proxy services potentially provided by 3X-UI, assuming the proxy protocol itself requires access via a public domain name. The traditional approach involves configuring domain resolution within the ClawCloud Run control panel, pointing an A record to the server's public IP. Then, setting up a reverse proxy like Nginx on the server to forward domain traffic to the locally running 3X-UI service. However, as mentioned in the introduction, ClawCloud Run's domain binding service may experience delays or unresponsiveness, rendering this traditional method unreliable. Cloudflare Tunnel offers a revolutionary alternative. It functions as an outbound-only reverse proxy tunnel [21]. This means you don't need to open any inbound ports on your server or configure a public IP. Instead, you run a lightweight client called cloudflared on your server. It actively establishes an encrypted, persistent outbound connection from your server to Cloudflare's edge network. Then, simply point the CNAME record for your domain (e.g., 3x-ui.yourdomain.com) to the tunnel endpoint generated by Cloudflare in the Cloudflare DNS control panel. When users access https://3x-ui.yourdomain.com, requests first reach Cloudflare's edge servers and are then forwarded through the established secure tunnel to the local service specified by the cloudflared client on your server (e.g., http://localhost:2053, the 3X-UI web dashboard).
Deploying Cloudflare Tunnel typically involves the following steps, which can be completed on a local development machine or any internet-accessible location. The generated configuration file and credentials are then uploaded to the ClawCloud Run server:
- Install `cloudflared`: Download and install the `cloudflared` client for your server's operating system.
- Authenticate: Run `cloudflared tunnel login`. This opens a browser window to authorize `cloudflared` to access your Cloudflare domain.
- Create a tunnel: Run `cloudflared tunnel create <tunnel-name>` (e.g. `cloudflared tunnel create 3x-ui-tunnel`). Cloudflare generates a unique tunnel ID and a JSON-formatted credentials file. That credentials file is critical and must be stored securely; the `cloudflared` client later uses it to authenticate and connect to this tunnel.
- Configure the tunnel: Create a YAML configuration file (e.g. `~/.cloudflared/config.yml`) that defines how the tunnel routes incoming traffic to local services. A basic configuration looks like this:
tunnel: <your-tunnel-uuid> # The tunnel UUID obtained from the previous step
credentials-file: /path/to/your/tunnel-credentials.json # Path to the credentials file generated in the previous step
ingress:
- hostname: 3x-ui.yourdomain.com
service: http://localhost:2053 # Forward traffic to the 3X-UI Web Panel on local port 2053.
- service: http_status:404 # All other requests return a 404 error
This configuration instructs Cloudflare Tunnel to proxy all traffic destined for 3x-ui.yourdomain.com to port 2053 on the local machine.
- Create DNS record: Run `cloudflared tunnel route dns <tunnel-name> <hostname>` (e.g. `cloudflared tunnel route dns 3x-ui-tunnel 3x-ui.yourdomain.com`). This automatically creates a CNAME record in your Cloudflare DNS zone pointing the domain at the tunnel.
- Run the tunnel: Finally, run `cloudflared tunnel --config /path/to/config.yml run` on the server to start the tunnel client.
In our docker-compose.yml example, Cloudflare Tunnel runs as a separate Docker service (cloudflare-tunnel). This gives a clean separation of concerns: 3X-UI and its WARP outbound live in one container, while the Tunnel's inbound proxy runs in another. The two communicate over Docker networking: if both containers share the default compose network, cloudflared reaches 3X-UI by service name (http://3x-ui-app:2053); if host networking is configured for both, http://localhost:2053 works as well. The cloudflared container gets its tunnel credentials and configuration from the ./cloudflared/ directory via a volume mount, and the restart: unless-stopped policy keeps the tunnel service highly available.
Cloudflare Tunnel offers multiple advantages. It perfectly circumvents potential domain binding issues on the ClawCloud Run platform, as we no longer rely on the platform to handle DNS or port forwarding. It significantly enhances security since no inbound ports need to be opened on the server; all inbound traffic is filtered and protected by Cloudflare's edge WAF (Web Application Firewall), and the tunnel itself is encrypted. It also simplifies SSL/TLS certificate management. You can enable Cloudflare's free wildcard certificate for your domain within the Cloudflare control panel. Cloudflare terminates the SSL connection at its edge servers and forwards traffic to your local service as HTTP (or you can configure end-to-end encryption).
Synergy Between WARP and Tunnel
Combining Cloudflare WARP and Cloudflare Tunnel creates a robust network architecture that delivers comprehensive network optimization and protection for 3X-UI services deployed on ClawCloud Run. WARP handles all "outbound" traffic, ensuring the 3X-UI proxy server maintains a stable, fast, and privacy-protected connection when accessing the external world. Tunnel handles all "inbound" traffic, providing users with a secure, reliable way to access the 3X-UI management panel without worrying about ClawCloud Run's network limitations. Working together, these two components place the 3X-UI service within a "greenhouse" meticulously maintained by Cloudflare's global network: externally, it possesses a premium egress point via WARP; internally, it maintains a secure entry point through Tunnel. This architecture not only resolves platform-specific limitations but also establishes a universal, highly resilient, and secure service deployment model. It abstracts complex network configurations and management, allowing developers to focus on application functionality rather than underlying network challenges. Through this deep integration and synergy, we successfully transformed a complex application—originally difficult to deploy and manage in a restricted environment—into a stable, secure, and easily accessible cloud-native service.
Pursuing Excellence: 3X-UI Panel Configuration, Operations, and Troubleshooting
After completing the underlying infrastructure—namely, Docker containers integrated with 3X-UI and Cloudflare WARP, managed via supervisord, alongside Cloudflare Tunnel handling inbound traffic—our focus shifted to application-layer configuration, long-term operational maintenance, and inevitable troubleshooting. 3X-UI itself provides a feature-rich web interface enabling granular management of proxy protocols, ports, users, and more. However, achieving optimal performance and stability within our unique, multi-layered technical stack environment requires specific configurations. Simultaneously, a robust deployment strategy must incorporate effective monitoring, log management, and fault response mechanisms to ensure long-term service availability. This chapter will delve into the critical configuration settings of the 3X-UI panel, provide a practical operations monitoring guide, and offer insightful analysis and solutions for common issues that may arise in this complex deployment.
Targeted Configuration for the 3X-UI Panel
Once the entire system successfully launches via docker-compose up -d and Cloudflare Tunnel is operational, you can access the 3X-UI web management panel using the previously configured domain (e.g., https://3x-ui.yourdomain.com).
After initial login, performing targeted configurations is essential to ensure alignment with our deployment environment (no IPv6, WARP outbound) and the selected port (2053).
- Inbound Settings:
- Port Selection: When creating inbound rules, make sure the listening port matches the one mapped in docker-compose.yml, port 2053. 3X-UI itself can listen on any port, but only ports mapped through Docker are reachable from outside the container (including through Cloudflare Tunnel). A non-standard port like 2053 can sometimes sidestep blocking or interference aimed at common proxy ports (e.g. 80, 443, 8080) in certain networks.
- Protocol Selection: 3X-UI supports multiple protocols including VLESS, VMess, Trojan, and Shadowsocks. Since we operate in a pure IPv4 environment, make sure the chosen protocol and its configuration do not depend on IPv6. VLESS and Trojan are comparatively newer, higher-performance protocols that usually work well with the WARP outbound.
- Disable IPv6: In the detailed inbound settings, if any IPv6-related options exist (e.g. "listen on IPv6" or "allow IPv6 clients"), make sure they are disabled. This keeps 3X-UI from trying to handle IPv6 traffic in an environment without IPv6 support, reducing potential errors and wasted resources.
- Transport Settings: For bypassing censorship or complex firewalls, different transport layers can be configured, such as WebSocket (ws), gRPC, or plain TCP. WebSocket and gRPC usually camouflage better because their traffic resembles ordinary HTTPS. If you choose these transports and want to carry the traffic over Cloudflare Tunnel, Cloudflare generally proxies them well.
- WARP Integration (In-Panel Configuration):
- We already start Cloudflare WARP at the system level via supervisord, and it proxies all outbound traffic. The 3X-UI panel itself may also expose some WARP integration options, for example entering a WARP license key (if you have a paid plan with extra features or bandwidth) or viewing WARP connection status in the panel. Configure these as needed, but the core WARP connection is provided by the underlying `warp-svc` and `warp-cli` services and does not depend on any panel settings.
- Outbound Settings:
- Default Outbound: Within 3X-UI's outbound rule configuration, you can set a default outbound proxy. Since we've already implemented a global outbound proxy via system-level WARP, this "Default Outbound" can be set to "direct". All traffic leaving the container is already handled by WARP.
- Advanced Routing and Traffic Splitting: 3X-UI's strength lies in its flexible routing. You can create custom outbound rules that split traffic across different outbounds based on domain, IP address, or GeoIP, for example sending domestic sites out directly while routing international sites through a specific proxy. In our deployment WARP is already the global egress, so finer control (e.g. letting some traffic bypass WARP) requires planning the outbound rules in the Xray core configuration, and possibly adjusting WARP itself (e.g. proxy mode instead of VPN mode, or split tunneling); a minimal sketch follows this list. For most use cases, the global WARP outbound already provides good performance and privacy protection.
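If you do want that finer-grained control, one common pattern is to add a SOCKS outbound in the Xray configuration that points at WARP's local proxy endpoint and route selected traffic through it. This is a minimal, illustrative sketch only, assuming WARP runs in proxy mode (as the author notes later for this platform) and exposes its default local SOCKS port 40000 (verify with `warp-cli settings`); the tag names and the routing rule are examples, not the deployment's actual config:

{
  "outbounds": [
    { "tag": "direct", "protocol": "freedom" },
    {
      "tag": "warp-proxy",
      "protocol": "socks",
      "settings": { "servers": [ { "address": "127.0.0.1", "port": 40000 } ] }
    }
  ],
  "routing": {
    "rules": [
      { "type": "field", "domain": [ "geosite:geolocation-!cn" ], "outboundTag": "warp-proxy" }
    ]
  }
}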
Operations Monitoring and Log Management
A robust system relies on effective monitoring and log management. For our deployment solution, monitoring can be conducted at multiple levels:
- Container-level monitoring:
- `docker ps -a`: View the status of all containers; make sure `3xui_app` and `cf-tunnel` are both `Up`.
- `docker logs <container_name/id>`: View a container's standard output/error logs. For the `3xui_app` container this shows supervisord's own log plus the logs of every service supervisord manages (where those logs are redirected to stdout/stderr, as in our supervisord.conf).
- `docker-compose logs`: Run in the directory containing docker-compose.yml to see logs from all services managed by docker-compose; add `-f` to follow the output in real time.
- supervisord Internal Service Monitoring:
- Enter the `3xui_app` container: `docker exec -it 3xui_app bash`.
- Inside the container, `supervisorctl status` shows the state of every process supervisord manages (`dbus`, `warp-svc`, `warp-init`, `x-ui`): `RUNNING`, `STARTING`, `STOPPED`, `EXITED`, `FATAL`, and so on. This is the core tool for diagnosing problems with services inside the container.
- `supervisorctl tail <program_name>`: Shows the standard-output log of a specific service, e.g. `supervisorctl tail x-ui`.
- `supervisorctl restart <program_name>`: Restarts a single service without restarting the whole container, which is very useful for quickly recovering a failed service.
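For example, a typical interactive session (container name as in the compose file above):

docker exec -it 3xui_app bash
supervisorctl status                 # list the states of dbus / warp-svc / warp-init / x-ui
supervisorctl tail -f x-ui           # follow 3X-UI's stdout log
supervisorctl restart warp-svc       # restart one service without touching the container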
- Cloudflare WARP Status Check:
- Inside the `3xui_app` container, run `warp-cli --accept-tos status` to see detailed WARP connection information: whether it is connected, the connection type (WireGuard/MASQUE), the assigned IP address, and so on.
- Cloudflare Tunnel Status Check:
- View the `cf-tunnel` container's logs: `docker logs cf-tunnel`.
- In the Cloudflare dashboard, go to Zero Trust -> Networks -> Tunnels to view tunnel connection status, health checks, and traffic statistics.
- Log Rotation and Persistence:
- Configure log rotation options (e.g. `max-size`, `max-file`) in docker-compose.yml so individual log files cannot grow without bound and consume excessive disk space; a sketch follows this list.
- Persisting the logs of services managed by supervisord to the host (via the volume mount `./supervisor_logs/:/var/log/supervisor/`) is an excellent practice: historical logs stay available even after the container is deleted, which helps post-mortem analysis and troubleshooting.
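The compose example earlier does not show these logging options explicitly; a hedged sketch of what they could look like for the `3x-ui-app` service (values are illustrative):

services:
  3x-ui-app:
    logging:
      driver: json-file
      options:
        max-size: "10m"    # rotate once a single log file reaches 10 MB
        max-file: "3"      # keep at most three rotated files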
Common Issues and In-Depth Solutions
Despite careful design, various problems may still arise during actual operation. Below are some common issues and their potential solutions:
- Domain Binding/Access Issues (Cloudflare Tunnel Related):
- Symptom: Unable to access the 3X-UI panel via the configured domain.
- Troubleshooting:
- Verify the `cf-tunnel` container is running (`docker ps`).
- Review the `cf-tunnel` container logs (`docker logs cf-tunnel`) for connection errors or configuration warnings.
- Verify that the CNAME record in Cloudflare DNS correctly points to the tunnel.
- In the Cloudflare dashboard's Tunnel settings, confirm the tunnel status shows "Healthy".
- Verify the `hostname` and `service` entries in config.yml. The `service` must point at the 3X-UI service inside the `3xui_app` container: if both containers share the default Docker network, use `http://3x-ui-app:2053` (the Docker service name as hostname); if `3xui_app` uses `network_mode: host`, use `http://localhost:2053`.
- Check Cloudflare's SSL/TLS encryption mode (SSL/TLS -> Overview). With "Full (Strict)", your local service must also present a valid SSL certificate; for getting started, "Flexible" (HTTPS from visitors to Cloudflare, HTTP from Cloudflare to your server) or "Full" (HTTPS to your server without certificate validation) may be easier to configure.
- WARP Connection Failure:
- Symptom: `supervisorctl status` shows `warp-svc` or `warp-init` in the `FATAL` or `EXITED` state, or `warp-cli status` inside the container reports no connection.
- Troubleshooting:
- Use `supervisorctl tail warp-svc` and `supervisorctl tail warp-init` to read the detailed error logs.
- Verify the `dbus` service is running (`supervisorctl status dbus`); WARP relies on D-Bus.
- Check the output of the `warp-init` script to see whether the registration and connect steps succeeded.
- Try resetting WARP manually inside the container: `warp-cli --accept-tos delete`, then register again and reconnect (see the sketch below).
- Verify the container's capabilities and security options give WARP sufficient permission for its network operations.
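The manual reset mentioned above could look like this inside the container (note that the registration subcommand differs between warp-cli versions: older builds use `register`, newer ones `registration new`):

warp-cli --accept-tos delete             # drop the current registration
warp-cli --accept-tos registration new   # or: warp-cli --accept-tos register (older versions)
warp-cli --accept-tos connect
warp-cli --accept-tos status             # should eventually report "Connected"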
- 3X-UI Service Anomaly:
- Symptom: Unable to access the 3X-UI panel, or proxy connections fail.
- Troubleshooting:
- Use `supervisorctl status x-ui` to check the 3X-UI process state.
- Use `supervisorctl tail x-ui` to look for startup or runtime errors in the 3X-UI log.
- Verify the port mapping is correct and that no other process inside the container occupies port 2053.
- Verify the 3X-UI inbound rule configuration, especially the protocol, port, and transport settings.
- If you changed supervisord.conf or the `x-ui` environment variables, restart the `3xui_app` container, or run `supervisorctl update` followed by `supervisorctl restart x-ui` to apply the changes.
- Container Permission Issues:
- Symptom: Service fails to start due to insufficient permissions (e.g., "Operation not permitted" error).
- Troubleshooting:
- Review the `cap_add` and `security_opt` settings in docker-compose.yml; ensure the necessary Linux capabilities (e.g. `NET_ADMIN`, `SYS_ADMIN`) are granted.
- If using `seccomp:unconfined`, weigh its risks and consider removing it or switching to a custom seccomp profile.
- Check the `user` setting of each process in supervisord.conf to make sure it can access the files and directories it needs.
- Resource Limitations and Performance Issues:
- Symptoms: Slow service response or containers killed due to insufficient memory (OOMKilled).
- Troubleshooting and Resolution:
- Set resource limits for services in docker-compose.yml, such as `deploy.resources.limits.cpus` and `deploy.resources.limits.memory`, so a single service cannot exhaust host resources (see the sketch below).
- Use `docker stats` to monitor the containers' real-time CPU and memory usage.
- Regularly check log file sizes to make sure the log rotation configuration is working.
- Keep in mind the resource limits of the ClawCloud Run platform itself, especially on the free plan [4]; if resources are insufficient, consider upgrading the plan or optimizing the application configuration.
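A hedged sketch of such limits in docker-compose.yml (values are illustrative; whether `deploy.resources` is enforced depends on the Compose version and runtime):

services:
  3x-ui-app:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M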
Through this series of configuration, monitoring, and troubleshooting strategies, we not only ensured the successful deployment of the 3X-UI service in a complex environment but also provided a solid foundation for its long-term stable operation. This systematic approach to operations is an indispensable component of any production-grade deployment. It demands that we maintain high vigilance and control—from the macro-level architectural perspective down to the micro-level log details—enabling us to swiftly pinpoint issues and implement effective countermeasures when challenges arise.
Conclusion: Deep Reflections on Building Resilient Cloud Services Through Innovation Within Constraints
This report provides an in-depth analysis of the complete technical solution for successfully deploying and maintaining stable operation of the 3X-UI proxy service management panel on ClawCloud Run—a cloud platform with specific limitations. Through meticulous analysis of environmental constraints, ingenious application of Docker containerization, deep integration of the supervisord process manager, and synergistic empowerment from Cloudflare WARP and Cloudflare Tunnel, we not only overcame platform-inherent challenges—including non-privileged environments, lack of IPv6 support, and domain binding delays—but also built a comprehensive solution integrating security, stability, and accessibility. This process was far from a simple stacking of technologies; it was a profound practice in achieving resilient deployment of complex applications under resource constraints through systematic thinking and innovative combinations.
Core Achievements and Technical Value
The core value of this deployment solution is reflected in the following dimensions:
- Systematic Methodology for Overcoming Environmental Constraints: Rather than avoiding or compromising with ClawCloud Run's constraints, we embraced them as opportunities for optimization and innovation. By decomposing challenges—permission isolation, network adaptation, service exposure—and strategically integrating technologies like Docker, WARP, and Tunnel, we developed a replicable, scalable methodology for deploying complex applications in constrained environments. This demonstrates that even under constrained resource conditions, meticulous technology selection and architectural design can deliver robust and stable services.
- In-Container Multi-Process Management in Practice: A key highlight of the solution is using supervisord to manage multiple interdependent system-level services (D-Bus, WARP, 3X-UI) inside a single Docker container. This is a considered exception to Docker's "one container, one process" guideline, and a reasonable adaptation for this scenario: it shows how a process manager can provide startup ordering, lifecycle monitoring, and fault recovery in environments that demand tighter cohesion and complex dependency handling. This granular process orchestration is key to making containerized applications more robust.
- Synergy within the Cloudflare Ecosystem: By integrating Cloudflare WARP and Cloudflare Tunnel into the deployment, we not only solved specific network problems but also gave the whole system far greater network resilience. WARP optimizes outbound traffic and protects its privacy, while Tunnel secures and simplifies inbound access. This deep use of the Cloudflare ecosystem reflects a broader trend in cloud-native development: leaning on specialized third-party services to compensate for platform limitations and extend core capabilities.
- Balancing Security and Maintainability: Throughout the design we weighed security against maintainability. For instance: granting permissions via `cap_add` instead of falling back on `--privileged`; centralizing log management and rotation through supervisord and Docker's logging mechanisms; and guarding service availability with the health check defined in docker-compose.yml. Together these details form a robust yet manageable deployment unit.
Deeper Insights and Lessons
Beyond specific technical implementations, this case offers deeper insights:
- The Importance of Abstraction and Layering: Successful system design relies on effective abstraction and layering. Docker containers encapsulate the application and its runtime environment, supervisord encapsulates process management inside the container, and Cloudflare Tunnel encapsulates the complexity of network exposure. Each layer offers a simplified, dependable interface to the layer above, keeping the system's overall complexity manageable.
- Shifting from "Problem Solving" to "Experience Optimization": The initial goal might simply have been "get 3X-UI running on ClawCloud Run". By introducing WARP and Tunnel we not only solved that problem but also improved the network experience (faster, more stable outbound traffic) and the access experience (more secure, more convenient inbound access). This shift in mindset, from meeting basic requirements to pursuing a better experience, is what drives continuous optimization of technical solutions.
- Rethinking Platform Constraints: While platform limitations may seem like obstacles in the short term, they often spur the creation of more innovative and versatile solutions in the long run. It was precisely the constraints of ClawCloud Run that prompted us to explore and implement this deployment model, which heavily relies on containerization and third-party networking services. This approach holds potential applicability across many other cloud environments and even private servers.
Why use cloudflare warp? Because I've been researching https://github.com/masx200/warp-on-actions
So I just used cloudflare warp to solve the issue of no IPv6 for outbound traffic.
Since the Kubernetes environment lacks permissions to create virtual network interfaces, warp must be set to proxy mode!
warp-cli --accept-tos --verbose mode proxy
I've noticed that domain binding for ClawCloud Run takes far too long and remains stuck in pending status. Many users are now reporting issues with this. It might be due to excessive server load. So for now, I've had to resort to using cloudflare tunnel as a workaround.
https://linux.do/t/topic/641357/17
Is it true that using cloudflare tunnel doesn't consume ClawCloud Run traffic limits?
@masx200, please do not post AI-generated text, as you have done in https://github.com/net4people/bbs/issues/553#issue-3719317842. It is a waste of everyone's time. If you do it again, I will block your account.
You have made some reasonable statements / asked some reasonable questions:
- "Why use cloudflare warp? Because I've been researching https://github.com/masx200/warp-on-actions, so I just used cloudflare warp to solve the issue of no IPv6 for outbound traffic. Since the Kubernetes environment lacks permissions to create virtual network interfaces, warp must be set to proxy mode: `warp-cli --accept-tos --verbose mode proxy`"
- "I've noticed that domain binding for ClawCloud Run takes far too long and remains stuck in pending status. Many users are now reporting issues with this. It might be due to excessive server load. So for now, I've had to resort to using cloudflare tunnel as a workaround. https://linux.do/t/topic/641357/17 Is it true that using cloudflare tunnel doesn't consume ClawCloud Run traffic limits?"
But nobody will read them, when they are buried below 64KB of "in today's highly interconnected digital era…" and similar meaningless text.
If you don't understand what you are talking about enough to explain it in your own words, then do not post it.