
Where to put the Gemma loader's model? Neither the HuggingFace cache nor the models folder works

Open xueqing0622 opened this issue 1 year ago • 0 comments

I am using this branch: https://github.com/Efficient-Large-Model/ComfyUI_ExtraModels/

Where should the Gemma loader's model go? Neither the HuggingFace cache nor the ComfyUI models folder works. I tried both of these locations:

```
I:\cache\huggingface\hub\models--unsloth--gemma-2-2b-it-bnb-4bit
F:\ComfyUI\ComfyUI\models\text_encoders\unsloth\gemma-2-2b-it-bnb-4bit
```

```
Prompt executed in 80.75 seconds
got prompt
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
low_cpu_mem_usage was None, now set to True since model is quantized.
!!! Exception during processing !!! unsloth/gemma-2-2b-it-bnb-4bit does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.
Traceback (most recent call last):
  File "F:\ComfyUI\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_ExtraModels\Gemma\nodes.py", line 63, in load_model
    text_encoder_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=dtype)
  File "F:\ComfyUI\python_embeded\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "F:\ComfyUI\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3708, in from_pretrained
    raise EnvironmentError(
OSError: unsloth/gemma-2-2b-it-bnb-4bit does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.
```

So the error is: `unsloth/gemma-2-2b-it-bnb-4bit does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.`

Also, the Gemma loader is very slow, and the log reports: `low_cpu_mem_usage was None, now set to True since model is quantized.`
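The `OSError` means `from_pretrained` resolved `unsloth/gemma-2-2b-it-bnb-4bit` to a directory that contains no loadable checkpoint. Note that in the HF cache layout, the actual weight files live under `models--unsloth--gemma-2-2b-it-bnb-4bit\snapshots\<revision-hash>\`, not at the top level of that folder, which may explain why pointing at either path above fails. As a minimal sketch (the filename list is taken straight from the error message; `has_model_weights` is a hypothetical helper, not part of the node), you can check whether a local folder actually contains something `from_pretrained` could load:

```python
import os

# Filenames transformers looks for, per the error message in the log above.
WEIGHT_FILES = {
    "pytorch_model.bin",
    "model.safetensors",
    "tf_model.h5",
    "model.ckpt",
    "flax_model.msgpack",
}

# Sharded checkpoints ship an index file instead of a single weight file.
INDEX_FILES = {
    "pytorch_model.bin.index.json",
    "model.safetensors.index.json",
}


def has_model_weights(model_dir: str) -> bool:
    """Return True if model_dir contains a checkpoint from_pretrained can load."""
    if not os.path.isdir(model_dir):
        return False
    names = set(os.listdir(model_dir))
    return bool(names & WEIGHT_FILES) or bool(names & INDEX_FILES)


# Hypothetical usage with the second path from the report:
# has_model_weights(r"F:\ComfyUI\ComfyUI\models\text_encoders\unsloth\gemma-2-2b-it-bnb-4bit")
```

If this returns False for the folder you point the loader at, the folder layout (rather than the node) is the likely problem; copying the files out of the cache's `snapshots\<hash>` subfolder so the weights sit directly in the target directory is one thing worth trying.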

xueqing0622 · Dec 17 '24, 06:12