
TORCH_CUDA_ARCH_LIST is not set. If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].

zeloong2333 opened this issue 11 months ago · 2 comments

```
Starting ComfyUI...
Adding extra search path checkpoints E:/ComfyUI-aki-v1.4/ComfyUI-aki-v1.4/models/checkpoints
Adding extra search path configs path/to/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae path/to/stable-diffusion-webui/models/VAE
Adding extra search path loras path/to/stable-diffusion-webui/models/Lora
Adding extra search path loras path/to/stable-diffusion-webui/models/LyCORIS
Adding extra search path upscale_models path/to/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models path/to/stable-diffusion-webui/models/RealESRGAN
Adding extra search path upscale_models path/to/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings path/to/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks path/to/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet path/to/stable-diffusion-webui/models/ControlNet
[START] Security scan
[DONE] Security scan

## ComfyUI-Manager: installing dependencies done.

** ComfyUI startup time: 2025-01-11 17:10:26.496771
** Platform: Windows
** Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
** Python executable: F:\ComfyUI_03\114514\Scripts\python.exe
** ComfyUI Path: F:\ComfyUI_03
** Log path: F:\ComfyUI_03\comfyui.log

Prestartup times for custom nodes:
   9.3 seconds: F:\ComfyUI_03\custom_nodes\ComfyUI-Manager

Total VRAM 6144 MB, total RAM 16263 MB
pytorch version: 2.5.1+cu124
xformers version: 0.0.29
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : cudaMallocAsync
Using xformers attention
[Prompt Server] web root: F:\ComfyUI_03\web
F:\ComfyUI_03\114514\lib\site-packages\kiui\nn\__init__.py:31: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
F:\ComfyUI_03\114514\lib\site-packages\kiui\nn\__init__.py:38: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead.
  def backward(ctx, g):
Warn!: xFormers is available (Attention)
Warn!: F:\ComfyUI_03\114514\lib\site-packages\torch\utils\cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(

[SPARSE] Backend: spconv, Attention: xformers

### Loading: ComfyUI-Manager (V2.55.5)
### ComfyUI Version: v0.3.10-11-g82ecb02 | Released on '2024-12-29'

Import times for custom nodes:
   0.0 seconds: F:\ComfyUI_03\custom_nodes\websocket_image_save.py
   1.4 seconds: F:\ComfyUI_03\custom_nodes\ComfyUI-Manager
  17.4 seconds: F:\ComfyUI_03\custom_nodes\ComfyUI-3D-Pack

Starting server

To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
```

[screenshot]

"F:\ComfyUI_03\start_comfyui.bat":

```bat
@echo off
echo Activating 114514 environment...
call F:\ComfyUI_03\114514\Scripts\activate

REM Determine your GPU architecture and set the TORCH_CUDA_ARCH_LIST environment variable.
REM For example, for the NVIDIA Ampere architecture (e.g. RTX 30 series), use 8.0;
REM for Pascal (e.g. GTX 1080), use 6.1; for Volta (e.g. Titan V), use 7.0.
set TORCH_CUDA_ARCH_LIST=7.5

REM Temporarily add the Visual C++ compiler path to the virtual environment's PATH.
set "PATH=%PATH%;D:\Visual Studio\2022\community\VC\Tools\MSVC\14.42.34433\bin\Hostx64\x64"

echo Starting ComfyUI...
cd /d F:\ComfyUI_03
python main.py
pause
```
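If you'd rather not hard-code the value, a small sketch like the following can derive it from the GPU at launch time. This is my own illustration, not part of ComfyUI; it assumes a CUDA-enabled PyTorch install, and `pick_arch_list` is a hypothetical helper name:

```python
# Sketch: derive TORCH_CUDA_ARCH_LIST from the visible GPU before anything
# triggers a torch.utils.cpp_extension build. Assumes torch with CUDA support.
import os
import torch

def pick_arch_list() -> str:
    """Return e.g. '7.5' for an RTX 2060 (compute capability 7.5)."""
    major, minor = torch.cuda.get_device_capability(0)
    return f"{major}.{minor}"

if torch.cuda.is_available() and "TORCH_CUDA_ARCH_LIST" not in os.environ:
    os.environ["TORCH_CUDA_ARCH_LIST"] = pick_arch_list()
print(os.environ.get("TORCH_CUDA_ARCH_LIST"))
```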

[screenshot]

"F:\ComfyUI_03\main.py":I didn't make any changes.The following is the corresponding content

```python
import comfy.options
comfy.options.enable_args_parsing()

import os
import importlib.util
import folder_paths
import time
from comfy.cli_args import args
from app.logger import setup_logger
import itertools
import utils.extra_config
import logging

if __name__ == "__main__":
    #NOTE: These do not do anything on core ComfyUI which should already have no communication with the internet, they are for custom nodes.
    os.environ['HF_HUB_DISABLE_TELEMETRY'] = '1'
    os.environ['DO_NOT_TRACK'] = '1'

setup_logger(log_level=args.verbose, use_stdout=args.log_stdout)


def apply_custom_paths():
    # extra model paths
    extra_model_paths_config_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "extra_model_paths.yaml")
    if os.path.isfile(extra_model_paths_config_path):
        utils.extra_config.load_extra_path_config(extra_model_paths_config_path)

    if args.extra_model_paths_config:
        for config_path in itertools.chain(*args.extra_model_paths_config):
            utils.extra_config.load_extra_path_config(config_path)

    # --output-directory, --input-directory, --user-directory
    if args.output_directory:
        output_dir = os.path.abspath(args.output_directory)
        logging.info(f"Setting output directory to: {output_dir}")
        folder_paths.set_output_directory(output_dir)

    # These are the default folders that checkpoints, clip and vae models will be saved to when using CheckpointSave, etc.. nodes
    folder_paths.add_model_folder_path("checkpoints", os.path.join(folder_paths.get_output_directory(), "checkpoints"))
    folder_paths.add_model_folder_path("clip", os.path.join(folder_paths.get_output_directory(), "clip"))
    folder_paths.add_model_folder_path("vae", os.path.join(folder_paths.get_output_directory(), "vae"))
    folder_paths.add_model_folder_path("diffusion_models",
                                       os.path.join(folder_paths.get_output_directory(), "diffusion_models"))
    folder_paths.add_model_folder_path("loras", os.path.join(folder_paths.get_output_directory(), "loras"))

    if args.input_directory:
        input_dir = os.path.abspath(args.input_directory)
        logging.info(f"Setting input directory to: {input_dir}")
        folder_paths.set_input_directory(input_dir)

    if args.user_directory:
        user_dir = os.path.abspath(args.user_directory)
        logging.info(f"Setting user directory to: {user_dir}")
        folder_paths.set_user_directory(user_dir)


def execute_prestartup_script():
    def execute_script(script_path):
        module_name = os.path.splitext(script_path)[0]
        try:
            spec = importlib.util.spec_from_file_location(module_name, script_path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            return True
        except Exception as e:
            logging.error(f"Failed to execute startup-script: {script_path} / {e}")
        return False

    if args.disable_all_custom_nodes:
        return

    node_paths = folder_paths.get_folder_paths("custom_nodes")
    for custom_node_path in node_paths:
        possible_modules = os.listdir(custom_node_path)
        node_prestartup_times = []

        for possible_module in possible_modules:
            module_path = os.path.join(custom_node_path, possible_module)
            if os.path.isfile(module_path) or module_path.endswith(".disabled") or module_path == "__pycache__":
                continue

            script_path = os.path.join(module_path, "prestartup_script.py")
            if os.path.exists(script_path):
                time_before = time.perf_counter()
                success = execute_script(script_path)
                node_prestartup_times.append((time.perf_counter() - time_before, module_path, success))
    if len(node_prestartup_times) > 0:
        logging.info("\nPrestartup times for custom nodes:")
        for n in sorted(node_prestartup_times):
            if n[2]:
                import_message = ""
            else:
                import_message = " (PRESTARTUP FAILED)"
            logging.info("{:6.1f} seconds{}: {}".format(n[0], import_message, n[1]))
        logging.info("")


apply_custom_paths()
execute_prestartup_script()


# Main code
import asyncio
import shutil
import threading
import gc

if os.name == "nt":
    logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())

if __name__ == "__main__":
    if args.cuda_device is not None:
        os.environ['CUDA_VISIBLE_DEVICES'] = str(args.cuda_device)
        os.environ['HIP_VISIBLE_DEVICES'] = str(args.cuda_device)
        logging.info("Set cuda device to: {}".format(args.cuda_device))

    if args.oneapi_device_selector is not None:
        os.environ['ONEAPI_DEVICE_SELECTOR'] = args.oneapi_device_selector
        logging.info("Set oneapi device selector to: {}".format(args.oneapi_device_selector))

    if args.deterministic:
        if 'CUBLAS_WORKSPACE_CONFIG' not in os.environ:
            os.environ['CUBLAS_WORKSPACE_CONFIG'] = ":4096:8"

    import cuda_malloc

if args.windows_standalone_build:
    try:
        from fix_torch import fix_pytorch_libomp
        fix_pytorch_libomp()
    except:
        pass

import comfy.utils

import execution
import server
from server import BinaryEventTypes
import nodes
import comfy.model_management


def cuda_malloc_warning():
    device = comfy.model_management.get_torch_device()
    device_name = comfy.model_management.get_torch_device_name(device)
    cuda_malloc_warning = False
    if "cudaMallocAsync" in device_name:
        for b in cuda_malloc.blacklist:
            if b in device_name:
                cuda_malloc_warning = True
        if cuda_malloc_warning:
            logging.warning("\nWARNING: this card most likely does not support cuda-malloc, if you get \"CUDA error\" please run ComfyUI with: --disable-cuda-malloc\n")


def prompt_worker(q, server_instance):
    current_time: float = 0.0
    e = execution.PromptExecutor(server_instance, lru_size=args.cache_lru)
    last_gc_collect = 0
    need_gc = False
    gc_collect_interval = 10.0

    while True:
        timeout = 1000.0
        if need_gc:
            timeout = max(gc_collect_interval - (current_time - last_gc_collect), 0.0)

        queue_item = q.get(timeout=timeout)
        if queue_item is not None:
            item, item_id = queue_item
            execution_start_time = time.perf_counter()
            prompt_id = item[1]
            server_instance.last_prompt_id = prompt_id

            e.execute(item[2], prompt_id, item[3], item[4])
            need_gc = True
            q.task_done(item_id,
                        e.history_result,
                        status=execution.PromptQueue.ExecutionStatus(
                            status_str='success' if e.success else 'error',
                            completed=e.success,
                            messages=e.status_messages))
            if server_instance.client_id is not None:
                server_instance.send_sync("executing", {"node": None, "prompt_id": prompt_id}, server_instance.client_id)

            current_time = time.perf_counter()
            execution_time = current_time - execution_start_time
            logging.info("Prompt executed in {:.2f} seconds".format(execution_time))

        flags = q.get_flags()
        free_memory = flags.get("free_memory", False)

        if flags.get("unload_models", free_memory):
            comfy.model_management.unload_all_models()
            need_gc = True
            last_gc_collect = 0

        if free_memory:
            e.reset()
            need_gc = True
            last_gc_collect = 0

        if need_gc:
            current_time = time.perf_counter()
            if (current_time - last_gc_collect) > gc_collect_interval:
                gc.collect()
                comfy.model_management.soft_empty_cache()
                last_gc_collect = current_time
                need_gc = False


async def run(server_instance, address='', port=8188, verbose=True, call_on_start=None):
    addresses = []
    for addr in address.split(","):
        addresses.append((addr, port))
    await asyncio.gather(server_instance.start_multi_address(addresses, call_on_start), server_instance.publish_loop())


def hijack_progress(server_instance):
    def hook(value, total, preview_image):
        comfy.model_management.throw_exception_if_processing_interrupted()
        progress = {"value": value, "max": total, "prompt_id": server_instance.last_prompt_id, "node": server_instance.last_node_id}

        server_instance.send_sync("progress", progress, server_instance.client_id)
        if preview_image is not None:
            server_instance.send_sync(BinaryEventTypes.UNENCODED_PREVIEW_IMAGE, preview_image, server_instance.client_id)

    comfy.utils.set_progress_bar_global_hook(hook)


def cleanup_temp():
    temp_dir = folder_paths.get_temp_directory()
    if os.path.exists(temp_dir):
        shutil.rmtree(temp_dir, ignore_errors=True)


def start_comfyui(asyncio_loop=None):
    """
    Starts the ComfyUI server using the provided asyncio event loop or creates a new one.
    Returns the event loop, server instance, and a function to start the server asynchronously.
    """
    if args.temp_directory:
        temp_dir = os.path.join(os.path.abspath(args.temp_directory), "temp")
        logging.info(f"Setting temp directory to: {temp_dir}")
        folder_paths.set_temp_directory(temp_dir)
    cleanup_temp()

    if args.windows_standalone_build:
        try:
            import new_updater
            new_updater.update_windows_updater()
        except:
            pass

    if not asyncio_loop:
        asyncio_loop = asyncio.new_event_loop()
        asyncio.set_event_loop(asyncio_loop)
    prompt_server = server.PromptServer(asyncio_loop)
    q = execution.PromptQueue(prompt_server)

    nodes.init_extra_nodes(init_custom_nodes=not args.disable_all_custom_nodes)

    cuda_malloc_warning()

    prompt_server.add_routes()
    hijack_progress(prompt_server)

    threading.Thread(target=prompt_worker, daemon=True, args=(q, prompt_server,)).start()

    if args.quick_test_for_ci:
        exit(0)

    os.makedirs(folder_paths.get_temp_directory(), exist_ok=True)
    call_on_start = None
    if args.auto_launch:
        def startup_server(scheme, address, port):
            import webbrowser
            if os.name == 'nt' and address == '0.0.0.0':
                address = '127.0.0.1'
            if ':' in address:
                address = "[{}]".format(address)
            webbrowser.open(f"{scheme}://{address}:{port}")
        call_on_start = startup_server

    async def start_all():
        await prompt_server.setup()
        await run(prompt_server, address=args.listen, port=args.port, verbose=not args.dont_print_server, call_on_start=call_on_start)

    # Returning these so that other code can integrate with the ComfyUI loop and server
    return asyncio_loop, prompt_server, start_all


if __name__ == "__main__":
    # Running directly, just start ComfyUI.
    event_loop, _, start_all_func = start_comfyui()
    try:
        event_loop.run_until_complete(start_all_func())
    except KeyboardInterrupt:
        logging.info("\nStopped server")

    cleanup_temp()
```

[screenshot]

[screenshot]

"F:\ComfyUI_03\114514\Lib\site-packages\torch\utils_cpp_extension_versioner.py"

```python
# mypy: allow-untyped-defs
import collections


Entry = collections.namedtuple('Entry', 'version, hash')


def update_hash(seed, value):
    # Good old boost::hash_combine
    # https://www.boost.org/doc/libs/1_35_0/doc/html/boost/hash_combine_id241013.html
    return seed ^ (hash(value) + 0x9e3779b9 + (seed << 6) + (seed >> 2))


def hash_source_files(hash_value, source_files):
    for filename in source_files:
        with open(filename) as file:
            hash_value = update_hash(hash_value, file.read())
    return hash_value


def hash_build_arguments(hash_value, build_arguments):
    for group in build_arguments:
        if group:
            for argument in group:
                hash_value = update_hash(hash_value, argument)
    return hash_value


class ExtensionVersioner:
    def __init__(self):
        self.entries = {}

    def get_version(self, name):
        entry = self.entries.get(name)
        return None if entry is None else entry.version

    def bump_version_if_changed(self,
                                name,
                                source_files,
                                build_arguments,
                                build_directory,
                                with_cuda,
                                is_python_module,
                                is_standalone):
        hash_value = 0
        hash_value = hash_source_files(hash_value, source_files)
        hash_value = hash_build_arguments(hash_value, build_arguments)
        hash_value = update_hash(hash_value, build_directory)
        hash_value = update_hash(hash_value, with_cuda)
        hash_value = update_hash(hash_value, is_python_module)
        hash_value = update_hash(hash_value, is_standalone)

        entry = self.entries.get(name)
        if entry is None:
            self.entries[name] = entry = Entry(0, hash_value)
        elif hash_value != entry.hash:
            self.entries[name] = entry = Entry(entry.version + 1, hash_value)

        return entry.version
```
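To illustrate what this versioner does (this demo is my own, not torch code): the version stays at 0 while the hashed inputs are unchanged, and bumps when any input, such as a build flag, changes:

```python
# Demo of ExtensionVersioner: version bumps only when hashed inputs change.
# Uses a temp file as a stand-in for a real extension source file.
import os
import tempfile

v = ExtensionVersioner()
with tempfile.NamedTemporaryFile("w", suffix=".cpp", delete=False) as f:
    f.write("int main() { return 0; }")
    src = f.name

print(v.bump_version_if_changed("my_ext", [src], [["-O2"]], "/tmp/build", True, True, False))  # 0
print(v.bump_version_if_changed("my_ext", [src], [["-O2"]], "/tmp/build", True, True, False))  # still 0: nothing changed
print(v.bump_version_if_changed("my_ext", [src], [["-O3"]], "/tmp/build", True, True, False))  # 1: flags changed
os.remove(src)
```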

[screenshot]

[screenshot]

Sorry, maybe the evidence I provided is disorganized. I tried adding this environment variable to my virtual environment and to the system environment, but the warning still says the variable needs to be set. I also tried modifying ComfyUI's startup script and main.py, but neither attempt succeeded.
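For reference, the usual way this is attempted is to set the variable at the very top of main.py, before anything can trigger a torch.utils.cpp_extension build. A sketch (not an official fix; "7.5" matches an RTX 2060, adjust for your GPU):

```python
# Sketch: pin the arch list at the very top of main.py, before any import
# that can trigger a torch.utils.cpp_extension JIT build.
import os
os.environ.setdefault("TORCH_CUDA_ARCH_LIST", "7.5")

import comfy.options
comfy.options.enable_args_parsing()
# ... the rest of main.py stays unchanged ...
```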

zeloong2333 · Jan 11 '25

Try `set TORCH_CUDA_ARCH_LIST="7.5"`, yes, WITH quotes.
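Note that whether the quotes end up inside the value depends on the shell. A quick way to check what Python actually receives (run from the same cmd session that set the variable):

```python
# Shows the raw value; if the shell kept the quotes, repr() will display them.
import os
print(repr(os.environ.get("TORCH_CUDA_ARCH_LIST")))
```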

iiiCpu · Jan 17 '25

Same question here.

```
C:\Windows\System32>mamba activate 3d

(3d) D:\+AI\myComfyUI>python -c "import torch; print(torch.cuda.get_device_capability())"
(8, 9)

(3d) D:\+AI\myComfyUI>set TORCH_CUDA_ARCH_LIST=8.9

(3d) D:\+AI\myComfyUI>python -c "import nvdiffrast.torch as dr;print(dr);glctx = dr.RasterizeGLContext()"
<module 'nvdiffrast.torch' from 'D:\+AI\miniconda3\envs\3d\Lib\site-packages\nvdiffrast\torch\__init__.py'>
D:\+AI\miniconda3\envs\3d\Lib\site-packages\torch\utils\cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(

(3d) D:\+AI\myComfyUI>python -c "import nvdiffrast.torch as dr;print(dr);glctx = dr.RasterizeCudaContext()"
<module 'nvdiffrast.torch' from 'D:\+AI\miniconda3\envs\3d\Lib\site-packages\nvdiffrast\torch\__init__.py'>
D:\+AI\miniconda3\envs\3d\Lib\site-packages\torch\utils\cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(

(3d) D:\+AI\myComfyUI>mamba list nvdiffrast
DEPRECATION: --no-python-version-warning is deprecated. pip 25.1 will enforce this behaviour change. A possible replace4
List of packages in environment: "D:\+AI\miniconda3\envs\3d"

nvdiffrast  0.3.3  pypi_0  pypi

(3d) D:\+AI\myComfyUI>python -c "import os;_arch_list = os.environ.get('TORCH_CUDA_ARCH_LIST', None);print(_arch_list);print('TORCH_CUDA_ARCH_LIST is ready.' if _arch_list else 'TORCH_CUDA_ARCH_LIST is empty.')"
8.9
TORCH_CUDA_ARCH_LIST is ready.

(3d) D:\+AI\myComfyUI>
```
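For what it's worth, the warning is emitted from torch's arch-flag resolution in cpp_extension.py. A minimal probe of that same code path, using `_get_cuda_arch_flags` (a private torch helper whose signature and behavior may change between versions), looks like this:

```python
# Minimal probe of the code path that emits the warning.
# _get_cuda_arch_flags is private torch API; treat this as a diagnostic sketch only.
import os
import torch.utils.cpp_extension as cpp_ext

print(os.environ.get("TORCH_CUDA_ARCH_LIST"))  # what this process actually sees
print(cpp_ext._get_cuda_arch_flags())          # warns here if the variable is unset/empty
```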

aswordok · Feb 27 '25

If you're starting ComfyUI with a .bat file, make sure to add `set TORCH_CUDA_ARCH_LIST=8.9` directly inside that file.

That said, I wouldn't recommend using .bat files — they give you less control over the environment compared to launching from a proper terminal or script.
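As a hypothetical alternative, a tiny Python launcher (the file name launch_comfyui.py is my own, not part of ComfyUI) can set the variable in-process and then hand off to main.py:

```python
# Hypothetical launcher (launch_comfyui.py): set the variable in this process,
# then run ComfyUI's main.py as if it were invoked directly.
import os
import runpy
import sys

os.environ.setdefault("TORCH_CUDA_ARCH_LIST", "8.9")  # adjust to your GPU's compute capability
sys.argv = ["main.py"]  # append any ComfyUI CLI flags you normally use
runpy.run_path("main.py", run_name="__main__")
```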

DenisKochetov · Aug 7 '25