
Unable to start the service

Open • BiFangKNT opened this issue 9 months ago • 20 comments

(omni) C:\github\OmniParser>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Fri_Feb_21_20:42:46_Pacific_Standard_Time_2025
Cuda compilation tools, release 12.8, V12.8.93
Build cuda_12.8.r12.8/compiler.35583870_0

(omni) C:\github\OmniParser>python gradio_demo.py
Florence2LanguageForConditionalGeneration has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
* Running on local URL:  http://0.0.0.0:7861
2025/03/14 17:12:05 [W] [service.go:132] login to server failed: session shutdown

Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.
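(Side note: the GenerationMixin warning above is only a deprecation notice and is unrelated to the startup failure. The change it asks model-code owners to make looks roughly like the sketch below; this is not OmniParser's actual code, just an illustration of what the warning describes.)

from transformers import PreTrainedModel
from transformers.generation import GenerationMixin

# per the warning: inherit from GenerationMixin after PreTrainedModel so the
# model keeps its `generate` capability from transformers v4.50 onwards
class Florence2LanguageForConditionalGeneration(PreTrainedModel, GenerationMixin):
    ...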

BiFangKNT (Mar 14 '25)

So what exactly is your problem? From your log it looks like it has already started:

* Running on local URL:  http://0.0.0.0:7861

Try opening http://127.0.0.1:7861. The frpc startup failure (the share link) is normal.
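If the page still does not load, a minimal sketch for checking whether anything is listening on that port at all (assuming the default port 7861 from the log above):

import socket

sock = socket.socket()
sock.settimeout(3)
result = sock.connect_ex(("127.0.0.1", 7861))  # 0 means something is accepting connections
print("port 7861 reachable:", result == 0)
sock.close()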

StevenLi-phoenix (Mar 15 '25)

So what exactly is your problem? From your log it looks like it has already started:

* Running on local URL: http://0.0.0.0:7861
Try opening http://127.0.0.1:7861. The frpc startup failure (the share link) is normal.

The page is completely blank. Could it be that Win10 isn't supported?

BiFangKNT (Mar 16 '25)

  1. I indeed couldn't get it running on Windows either. My main issue here is that the PaddleOCR load path is written Linux-style (see the path sketch below); I did get it running under WSL.

  2. The Gradio frontend still displays normally, though, which means the HTML page itself is fine.
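A small sketch of the kind of path issue point 1 describes; the paths here are hypothetical, not the ones actually used by the repo or by PaddleOCR:

from pathlib import Path

linux_style = "/root/.paddleocr/whl/det"               # hypothetical, only valid on Linux/WSL
portable = Path.home() / ".paddleocr" / "whl" / "det"  # resolves under the user profile on Windows too
print(linux_style)
print(portable)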

StevenLi-phoenix (Mar 16 '25)

  1. I indeed couldn't get it running on Windows either. My main issue here is that the PaddleOCR load path is written Linux-style; I did get it running under WSL.

  2. The Gradio frontend still displays normally, though, which means the HTML page itself is fine.

Let me add to what I said earlier: the blank page turns into a browser error after a while. In other words, that port isn't serving a web page at all; the service never actually came up.

BiFangKNT (Mar 16 '25)

Could you share your Python version number so I can try to reproduce this? I checked again: apart from PaddleOCR, which still fails for me (I commented that part out), the Gradio page works fine.

[screenshot]

[screenshot]
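For reference, "commenting that part out" can also be written as an import guard so gradio_demo.py still launches when PaddleOCR is broken. This is a minimal sketch, not the repo's actual code; the constructor arguments are assumptions:

# hypothetical guard: fall back to no OCR instead of crashing at startup
try:
    from paddleocr import PaddleOCR
    paddle_ocr = PaddleOCR(use_angle_cls=False, lang='en')
except Exception as exc:
    print(f"PaddleOCR unavailable, continuing without OCR: {exc}")
    paddle_ocr = None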

StevenLi-phoenix (Mar 16 '25)

Could you share your Python version number so I can try to reproduce this? I checked again: apart from PaddleOCR, which still fails for me (I commented that part out), the Gradio page works fine.

[screenshot]

[screenshot]

The machine isn't with me right now; I'll take another look tomorrow.

I installed it strictly following the README, though, so it should be 3.12.

BiFangKNT (Mar 16 '25)

My suggestion is to use WSL. It seems Paddle and PyTorch both need to be handled separately; on Linux both packages appear to install their GPU builds automatically, and on Windows there is also the problem of Chinese folder names...

For torch, use https://pytorch.org/get-started/locally/

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

For paddle, use https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/windows-pip.html

pip install paddlepaddle-gpu==2.6.2 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
pip install paddleocr

And if your Windows username is in Chinese, you also need to manually point the paths to a local directory that contains no Chinese characters.
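After both stacks are installed, a quick sanity check that each one actually sees the GPU (a sketch; paddle.utils.run_check() is Paddle's built-in installation self-test):

import torch
import paddle

print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
print("torch sees GPU:", torch.cuda.is_available())

# Paddle's own check: runs a small program on the GPU and reports success or failure
paddle.utils.run_check()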

StevenLi-phoenix (Mar 16 '25)

My suggestion is to use WSL. It seems Paddle and PyTorch both need to be handled separately; on Linux both packages appear to install their GPU builds automatically, and on Windows there is also the problem of Chinese folder names...

For torch, use https://pytorch.org/get-started/locally/

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

For paddle, use https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/windows-pip.html

pip install paddlepaddle-gpu==2.6.2 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
pip install paddleocr

And if your Windows username is in Chinese, you also need to manually point the paths to a local directory that contains no Chinese characters.

Ah, that's quite a hassle... it would be nice if there were a Docker image.

I'm on the default Administrator account, which is in English.

BiFangKNT (Mar 16 '25)

@StevenLi-phoenix Should CUDA be installed at version 12.3 at most? Paddle seems to support up to 12.3, but I installed 12.8.

The torch I installed is also a cu128 build:

pip uninstall torch torchvision torchaudio
Found existing installation: torch 2.7.0.dev20250311+cu128
Uninstalling torch-2.7.0.dev20250311+cu128:
  Would remove:
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\functorch\*
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\torch-2.7.0.dev20250311+cu128.dist-info\*
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\torch\*
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\torchgen\*
    c:\users\administrator\miniconda3\envs\omni\scripts\torchfrtrace.exe
    c:\users\administrator\miniconda3\envs\omni\scripts\torchrun.exe
Proceed (Y/n)? n
Found existing installation: torchvision 0.22.0.dev20250312+cu128
Uninstalling torchvision-0.22.0.dev20250312+cu128:
  Would remove:
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\torchvision-0.22.0.dev20250312+cu128.dist-info\*
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\torchvision\*
Proceed (Y/n)? n
Found existing installation: torchaudio 2.6.0.dev20250312+cu128
Uninstalling torchaudio-2.6.0.dev20250312+cu128:
  Would remove:
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\torchaudio-2.6.0.dev20250312+cu128.dist-info\*
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\torchaudio\*
    c:\users\administrator\miniconda3\envs\omni\lib\site-packages\torio\*
Proceed (Y/n)? n

BiFangKNT (Mar 17 '25)

Installing the latest PyTorch is fine; it's backward compatible. What mainly matters is the CUDA version: as long as the PyTorch build follows your CUDA version there's no problem.

StevenLi-phoenix (Mar 17 '25)

Installing the latest PyTorch is fine; it's backward compatible. What mainly matters is the CUDA version: as long as the PyTorch build follows your CUDA version there's no problem.

So I can just install CUDA 12.3 then? Paddle seems to support at most 12.3.

BiFangKNT (Mar 17 '25)

If you already have CUDA installed with a matching PyTorch build (in your case 12.8 > 12.3), you don't need to touch CUDA or PyTorch. According to NVIDIA's compatibility documentation (https://docs.nvidia.com/deploy/cuda-compatibility/index.html), you can install the CUDA 12.3 build of Paddle directly and it will run.
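To confirm this on a specific machine, one can compare what the torch wheel was built against with what the driver reports; a small diagnostic sketch, not from the thread:

import subprocess
import torch

print("torch wheel built for CUDA:", torch.version.cuda)   # e.g. 12.8
print("GPU usable from torch:", torch.cuda.is_available())

# the driver-reported CUDA version should cover what the wheels target
# (12.8 for this torch build, 12.3 for the Paddle build discussed here)
out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
print([line for line in out.splitlines() if "CUDA Version" in line])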

StevenLi-phoenix (Mar 17 '25)

If you already have CUDA installed with a matching PyTorch build (in your case 12.8 > 12.3), you don't need to touch CUDA or PyTorch. According to NVIDIA's compatibility documentation (https://docs.nvidia.com/deploy/cuda-compatibility/index.html), you can install the CUDA 12.3 build of Paddle directly and it will run.

python gradio_demo.py
INFO: Could not find files for the given pattern(s).
C:\Users\Administrator\miniconda3\envs\omni\Lib\site-packages\paddle\utils\cpp_extension\extension_utils.py:711: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
  warnings.warn(warning_message)
Traceback (most recent call last):
  File "C:\github\OmniParser\gradio_demo.py", line 11, in <module>
    from util.utils import check_ocr_box, get_yolo_model, get_caption_model_processor, get_som_labeled_img
  File "C:\github\OmniParser\util\utils.py", line 21, in <module>
    from paddleocr import PaddleOCR
  File "C:\Users\Administrator\miniconda3\envs\omni\Lib\site-packages\paddleocr\__init__.py", line 14, in <module>
    from .paddleocr import (
  File "C:\Users\Administrator\miniconda3\envs\omni\Lib\site-packages\paddleocr\paddleocr.py", line 21, in <module>
    from paddle.utils import try_import
  File "C:\Users\Administrator\miniconda3\envs\omni\Lib\site-packages\paddle\__init__.py", line 751, in <module>
    raise err
OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\Administrator\miniconda3\envs\omni\Lib\site-packages\paddle\..\nvidia\cudnn\bin\cudnn_cnn64_9.dll" or one of its dependencies.

That directory contains no nvidia folder at all:

[screenshot]

And the DLL file itself doesn't exist either:

[screenshot]

Is the project conflicting with Paddle somehow?
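One way to see what Paddle is missing is to check whether the cuDNN DLL it asks for exists under site-packages. The nvidia-cudnn-cu12 pip wheel is what normally provides that nvidia\cudnn\bin folder, though whether installing it alone fixes Paddle here is an assumption:

import os
import sysconfig

site_packages = sysconfig.get_paths()["purelib"]
dll = os.path.join(site_packages, "nvidia", "cudnn", "bin", "cudnn_cnn64_9.dll")
print(dll)
print("present:", os.path.exists(dll))
# if it is missing, `pip install nvidia-cudnn-cu12` would normally create
# site-packages\nvidia\cudnn\bin; whether paddle then loads it is untested here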

BiFangKNT (Mar 17 '25)

@StevenLi-phoenix In WSL, after manually installing the cu128 torch build, installing Paddle reported this:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torch 2.8.0.dev20250315+cu128 requires nvidia-cublas-cu12==12.8.3.14; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-cublas-cu12 12.3.4.1 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-cuda-cupti-cu12==12.8.57; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-cuda-cupti-cu12 12.3.101 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-cuda-nvrtc-cu12==12.8.61; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-cuda-nvrtc-cu12 12.3.107 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-cuda-runtime-cu12==12.8.57; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-cuda-runtime-cu12 12.3.101 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-cudnn-cu12==9.8.0.87; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-cudnn-cu12 9.1.1.17 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-cufft-cu12==11.3.3.41; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-cufft-cu12 11.2.1.3 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-curand-cu12==10.3.9.55; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-curand-cu12 10.3.5.147 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-cusolver-cu12==11.7.2.55; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-cusolver-cu12 11.6.1.9 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-cusparse-cu12==12.5.7.53; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-cusparse-cu12 12.3.1.170 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-nccl-cu12==2.25.1; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-nccl-cu12 2.19.3 which is incompatible.
torch 2.8.0.dev20250315+cu128 requires nvidia-nvtx-cu12==12.8.55; platform_system == "Linux" and platform_machine == "x86_64", but you have nvidia-nvtx-cu12 12.4.127 which is incompatible.
Successfully installed nvidia-cublas-cu12-12.3.4.1 nvidia-cuda-cupti-cu12-12.3.101 nvidia-cuda-nvrtc-cu12-12.3.107 nvidia-cuda-runtime-cu12-12.3.101 nvidia-cudnn-cu12-9.1.1.17 nvidia-cufft-cu12-11.2.1.3 nvidia-curand-cu12-10.3.5.147 nvidia-cusolver-cu12-11.6.1.9 nvidia-cusparse-cu12-12.3.1.170 nvidia-nccl-cu12-2.19.3 nvidia-nvtx-cu12-12.4.127 paddlepaddle-gpu-3.0.0rc1

It feels like the torch version has to follow Paddle?
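To see exactly which nvidia-* wheels the Paddle install downgraded (the ones the resolver warnings above refer to), a quick listing sketch:

from importlib import metadata

# print every installed nvidia-* wheel and its version so the cu128 vs cu12.3
# mismatch between the torch nightly and paddlepaddle-gpu 3.0.0rc1 is visible
for dist in sorted(metadata.distributions(), key=lambda d: (d.metadata["Name"] or "")):
    name = dist.metadata["Name"] or ""
    if name.startswith("nvidia-"):
        print(f"{name}=={dist.version}")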

BiFangKNT (Mar 17 '25)

You don't need to install those manually in WSL; on Linux just follow the README as-is. Manual handling is only needed on native Windows.

StevenLi-phoenix (Mar 17 '25)

@StevenLi-phoenix Can the Docker setup from the tutorial be installed directly on Windows? Nesting Docker inside WSL feels like it would perform very poorly.

BiFangKNT (Mar 18 '25)

@StevenLi-phoenix This is what happens when I install Docker in WSL:

sudo docker logs omni-windows
❯ Starting OmniParser Windows for Docker v0.00...
❯ For support visit https://github.com/microsoft/OmniParser
❯ CPU: AMD Ryzen 5 7500F 6 Core | RAM: 14/16 GB | DISK: 929 GB (ext4) | KERNEL: 5.15.167.4-microsoft-standard-WSL2...

❯ ERROR: KVM acceleration not available (no write access), this will cause a major loss of performance.
❯ ERROR: See the FAQ on how to diagnose the cause, or continue without KVM by setting KVM=N (not recommended).

I just cannot get KVM enabled no matter what. Does WSL need nested virtualization turned on? I can't seem to enable that either, even though SVM Mode is turned on in the BIOS.
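A quick check inside WSL for whether the KVM device is exposed at all (a sketch; the .wslconfig setting mentioned in the comments below is an assumption about how nested virtualization is normally enabled, and it generally needs a Windows 11 host):

import os

# /dev/kvm only appears inside WSL2 when nested virtualization is active
print("/dev/kvm exists:", os.path.exists("/dev/kvm"))
print("writable by this user:", os.access("/dev/kvm", os.W_OK))
# if it is missing, setting nestedVirtualization=true under [wsl2] in
# %UserProfile%\.wslconfig and then running `wsl --shutdown` is the usual switch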

BiFangKNT (Mar 18 '25)

I'm really not sure about that toolbox.

StevenLi-phoenix (Mar 19 '25)

Hello. Have these two been started?

[screenshot]

Shuu-Ri (May 09 '25)

Hello. Have these two been started?

[screenshot]

[screenshot]

The first item isn't there.

BiFangKNT (May 09 '25)