
Warning: AllocatorConfig.cpp:28] Warning: PYTORCH_CUDA_ALLOC_CONF is deprecated

Open mikinko opened this issue 2 months ago • 5 comments

Custom Node Testing

Expected Behavior

Hi, I just installed the official PyTorch 2.9 with CUDA 13 and Python 3.13 on a clean clone of ComfyUI. When closing the server, I get the warning below.

Actual Behavior

When pressing Ctrl-C to stop the web server on Windows, I got this warning:

Stopped server [W1018 01:28:55.000000000 AllocatorConfig.cpp:28] Warning: PYTORCH_CUDA_ALLOC_CONF is deprecated, use PYTORCH_ALLOC_CONF instead (function operator ())

Steps to Reproduce

Press Ctrl-C in the terminal.

Debug Logs

Checkpoint files will always be loaded safely.
Total VRAM 24576 MB, total RAM 64630 MB
pytorch version: 2.9.0+cu130
xformers version: 0.0.33+00a7a5f0.d20251018
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
working around nvidia conv3d memory bug.
Using xformers attention
Python version: 3.13.9 | packaged by conda-forge | (main, Oct 16 2025, 10:23:36) [MSC v.1944 64 bit (AMD64)]
ComfyUI version: 0.3.65
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
ComfyUI frontend version: 1.28.7
[Prompt Server] web root: S:\_ComfyUI_env\_env313\Lib\site-packages\comfyui_frontend_package\static

Import times for custom nodes:
   0.0 seconds: A:\ComfyUI\custom_nodes\websocket_image_save.py

Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://127.0.0.1:8188

Stopped server
[W1018 01:28:55.000000000 AllocatorConfig.cpp:28] Warning: PYTORCH_CUDA_ALLOC_CONF is deprecated, use PYTORCH_ALLOC_CONF instead (function operator ())
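The warning itself is harmless: PyTorch 2.9 renamed the allocator variable from `PYTORCH_CUDA_ALLOC_CONF` to `PYTORCH_ALLOC_CONF`, and the old name still works but prints this deprecation notice. Until ComfyUI (or whatever launcher sets the variable) migrates, one way to silence it is to mirror the old name into the new one before `torch` is imported. A minimal sketch, assuming the variable may be set by your shell or a launcher script; it is a no-op if the variable isn't set:

```python
import os

# PyTorch 2.9 prefers PYTORCH_ALLOC_CONF; the old PYTORCH_CUDA_ALLOC_CONF
# name still works but triggers the deprecation warning. Mirror the old
# value into the new name and drop the old one. This must run before
# `import torch`, since the allocator reads the variable at CUDA init.
old = os.environ.pop("PYTORCH_CUDA_ALLOC_CONF", None)
if old is not None and "PYTORCH_ALLOC_CONF" not in os.environ:
    os.environ["PYTORCH_ALLOC_CONF"] = old
```

If you never set `PYTORCH_CUDA_ALLOC_CONF` yourself, the variable is likely being set internally, in which case the warning will go away once the code that sets it is updated upstream.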

Other

No response

mikinko avatar Oct 17 '25 23:10 mikinko

I found the same issue when debugging Megatron with fewer GPUs

I've got the same error after upgrading ComfyUI.

40inD avatar Dec 15 '25 05:12 40inD

I've got the same error after upgrading ComfyUI too.

SDesore avatar Dec 19 '25 17:12 SDesore

I have a GMKtec NucBox K8 Plus with OCuLink and purchased an NVIDIA RTX 3060 12 GB in early October 2025. I have had nothing but trouble trying to install the CUDA drivers.

I followed your instructions to install:

$ cd ai/path

$ git clone https://github.com/comfyanonymous/ComfyUI.git

Get the latest updates in the ComfyUI directory:

$ git pull (output: up to date)

$ source venv/bin/activate

$ pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130

Install the requirements from /ai/path/ComfyUI/manager_requirements.txt:

$ pip install -r manager_requirements.txt

Install the requirements from /ai/path/ComfyUI/requirements.txt:

$ pip install -r requirements.txt

I also had to install:

$ pip install einops

$ pip install psutil

I was surprised ComfyUI installed CUDA drivers.

When I tried to --enable-manager, here is the terminal output

$ python3 main.py --enable-manager
[START] Security scan
[DONE] Security scan
** ComfyUI startup time: 2025-12-21 07:55:40.359
** Platform: Linux
** Python version: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]
** Python executable: /ai/path/ComfyUI/venv/bin/python3
** ComfyUI Path: /ai/path/ComfyUI
** ComfyUI Base Folder Path: /ai/path/ComfyUI
** User directory: /ai/path/ComfyUI/user
** ComfyUI-Manager config path: /ai/path/ComfyUI/user/__manager/config.ini
** Log path: /ai/path/ComfyUI/user/comfyui.log
[PRE] ComfyUI-Manager
Checkpoint files will always be loaded safely.
Traceback (most recent call last):
  File "/ai/path/ComfyUI/main.py", line 177, in <module>
    import execution
  File "/ai/path/ComfyUI/execution.py", line 15, in <module>
    import comfy.model_management
  File "/ai/path/ComfyUI/comfy/model_management.py", line 239, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
                                  ^^^^^^^^^^^^^^^^^^
  File "/ai/path/ComfyUI/comfy/model_management.py", line 189, in get_torch_device
    return torch.device(torch.cuda.current_device())
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ai/path/ComfyUI/venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 1069, in current_device
    _lazy_init()
  File "/ai/path/ComfyUI/venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 410, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
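The traceback above dies inside `torch._C._cuda_init()`, which usually means one of two distinct problems: the NVIDIA driver isn't visible (a real risk with an eGPU over OCuLink), or pip resolved a torch wheel without CUDA support. A quick triage sketch to separate the two cases (`diagnose_cuda` is a hypothetical helper, not part of ComfyUI):

```python
import importlib.util
import shutil

# Triage helper for "RuntimeError: No CUDA GPUs are available":
# separates "NVIDIA driver not visible" from "torch missing / non-CUDA build".
def diagnose_cuda():
    report = {}
    # nvidia-smi ships with the driver, so its absence from PATH usually
    # means the driver isn't installed or the GPU isn't attached.
    report["nvidia_smi_on_path"] = shutil.which("nvidia-smi") is not None
    report["torch_installed"] = importlib.util.find_spec("torch") is not None
    if report["torch_installed"]:
        import torch
        report["torch_version"] = torch.__version__   # "+cu130" suffix => CUDA 13.0 build
        report["cuda_build"] = torch.version.cuda is not None
        report["cuda_available"] = torch.cuda.is_available()
    return report

print(diagnose_cuda())
```

If `nvidia_smi_on_path` is False, fix the driver first (the pip install only pulls in the CUDA runtime libraries, not the kernel driver). If the driver is fine but `cuda_available` is False, reinstall torch making sure the cu130 wheel was actually selected rather than a fallback from PyPI.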

Can you help? "Help Me Obi-Wan Kenobi, You're My Only Hope."

pythonbytes avatar Dec 20 '25 23:12 pythonbytes