Custom Node Testing
- [x] I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
Hi, I just installed the official PyTorch 2.9 with CUDA 13 and Python 3.13 on a clean clone of ComfyUI. When closing the server I get this warning.
Actual Behavior
When pressing Ctrl-C to stop the web server on Windows, I get this warning:
Stopped server [W1018 01:28:55.000000000 AllocatorConfig.cpp:28] Warning: PYTORCH_CUDA_ALLOC_CONF is deprecated, use PYTORCH_ALLOC_CONF instead (function operator ())
Steps to Reproduce
Press Ctrl-C in the terminal.
Debug Logs
Checkpoint files will always be loaded safely.
Total VRAM 24576 MB, total RAM 64630 MB
pytorch version: 2.9.0+cu130
xformers version: 0.0.33+00a7a5f0.d20251018
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
working around nvidia conv3d memory bug.
Using xformers attention
Python version: 3.13.9 | packaged by conda-forge | (main, Oct 16 2025, 10:23:36) [MSC v.1944 64 bit (AMD64)]
ComfyUI version: 0.3.65
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
ComfyUI frontend version: 1.28.7
[Prompt Server] web root: S:\_ComfyUI_env\_env313\Lib\site-packages\comfyui_frontend_package\static
Import times for custom nodes:
0.0 seconds: A:\ComfyUI\custom_nodes\websocket_image_save.py
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server
To see the GUI go to: http://127.0.0.1:8188
Stopped server
[W1018 01:28:55.000000000 AllocatorConfig.cpp:28] Warning: PYTORCH_CUDA_ALLOC_CONF is deprecated, use PYTORCH_ALLOC_CONF instead (function operator ())
Other
No response
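The deprecation warning in the logs above is cosmetic: PyTorch 2.9 renamed the allocator environment variable from PYTORCH_CUDA_ALLOC_CONF to PYTORCH_ALLOC_CONF, and the old name still works but triggers the message. As a minimal sketch (assuming the legacy variable being set is what triggers the warning; the helper name is mine, not ComfyUI's), the value can be migrated before torch is imported:

```python
import os

def migrate_alloc_conf(env):
    """Copy a deprecated PYTORCH_CUDA_ALLOC_CONF value to the new
    PYTORCH_ALLOC_CONF key so PyTorch 2.9+ stops warning about it.
    An existing PYTORCH_ALLOC_CONF is never overwritten."""
    legacy = env.pop("PYTORCH_CUDA_ALLOC_CONF", None)
    if legacy is not None and "PYTORCH_ALLOC_CONF" not in env:
        env["PYTORCH_ALLOC_CONF"] = legacy
    return env

# Run against the real environment before `import torch`:
migrate_alloc_conf(os.environ)
```

This has to execute before the first `import torch`, since the allocator reads the variable at startup.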
I found the same issue when debugging Megatron with fewer GPUs
I've got the same error after upgrading ComfyUI.
I've got the same error after upgrading ComfyUI too.
I have a GMKtec NucBox K8 Plus with OCuLink and purchased an Nvidia RTX 3060 12 GB in early October 2025. I have had nothing but trouble trying to install the CUDA drivers.
I followed your instructions to install:
$ cd ai/path
$ git clone https://github.com/comfyanonymous/ComfyUI.git
Get the latest updates in the ComfyUI directory:
$ git pull   (output: already up to date)
$ source venv/bin/activate
$ pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
Install the manager requirements (/ai/path/ComfyUI/manager_requirements.txt):
$ pip install -r manager_requirements.txt
Install the main requirements (/ai/path/ComfyUI/requirements.txt):
$ pip install -r requirements.txt
I also had to install:
$ pip install einops
$ pip install psutil
I was surprised ComfyUI installed CUDA drivers.
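Before debugging ComfyUI itself, it can help to confirm that the installed PyTorch wheel actually sees the GPU. A small probe like the following (the function name is mine; it degrades gracefully when torch is missing) separates driver problems from ComfyUI problems:

```python
import importlib.util

def cuda_report():
    """Return a one-line summary of PyTorch / CUDA availability
    without crashing when torch is not installed."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if not torch.cuda.is_available():
        return f"torch {torch.__version__} installed, CUDA not available"
    return f"torch {torch.__version__}, CUDA device: {torch.cuda.get_device_name(0)}"

print(cuda_report())
```

If this reports "CUDA not available", the issue is in the driver/wheel combination, not in ComfyUI or its manager.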
When I tried to start with --enable-manager, here is the terminal output:
$ python3 main.py --enable-manager
[START] Security scan
[DONE] Security scan
** ComfyUI startup time: 2025-12-21 07:55:40.359
** Platform: Linux
** Python version: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]
** Python executable: /ai/path/ComfyUI/venv/bin/python3
** ComfyUI Path: /ai/path/ComfyUI
** ComfyUI Base Folder Path: /ai/path/ComfyUI
** User directory: /ai/path/ComfyUI/user
** ComfyUI-Manager config path: /ai/path/ComfyUI/user/__manager/config.ini
** Log path: /ai/path/ComfyUI/user/comfyui.log
[PRE] ComfyUI-Manager
Checkpoint files will always be loaded safely.
Traceback (most recent call last):
File "/ai/path/ComfyUI/main.py", line 177, in