Florence2 Image2Prompt (Advance) gives me: LayerUtility: Florence2Image2Prompt 'NoneType' object is not callable
Expected Behavior
For Florence2 Image2Prompt (Advance) to run and return a text prompt describing the image.
Actual Behavior
LayerUtility: Florence2Image2Prompt
'NoneType' object is not callable
Steps to Reproduce
Press Queue Prompt.
Debug Logs
## ComfyUI-Manager: installing dependencies done.
[2025-03-25 20:34:23.808] ** ComfyUI startup time: 2025-03-25 20:34:23.808
[2025-03-25 20:34:23.809] ** Platform: Windows
[2025-03-25 20:34:23.809] ** Python version: 3.12.9 (tags/v3.12.9:fdb8142, Feb 4 2025, 15:27:58) [MSC v.1942 64 bit (AMD64)]
[2025-03-25 20:34:23.810] ** Python executable: D:\ComfyUI_windows_portable\python_embeded\python.exe
[2025-03-25 20:34:23.811] ** ComfyUI Path: D:\ComfyUI_windows_portable\ComfyUI
[2025-03-25 20:34:23.811] ** ComfyUI Base Folder Path: D:\ComfyUI_windows_portable\ComfyUI
[2025-03-25 20:34:23.812] ** User directory: D:\ComfyUI_windows_portable\ComfyUI\user
[2025-03-25 20:34:23.813] ** ComfyUI-Manager config path: D:\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini
[2025-03-25 20:34:23.814] ** Log path: D:\ComfyUI_windows_portable\ComfyUI\user\comfyui.log
Prestartup times for custom nodes:
[2025-03-25 20:34:25.685] 4.3 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
[2025-03-25 20:34:25.685]
[2025-03-25 20:34:27.422] Checkpoint files will always be loaded safely.
[2025-03-25 20:34:27.593] Total VRAM 12288 MB, total RAM 65448 MB
[2025-03-25 20:34:27.594] pytorch version: 2.6.0+cu126
[2025-03-25 20:34:27.595] Set vram state to: NORMAL_VRAM
[2025-03-25 20:34:27.595] Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
[2025-03-25 20:34:28.856] Using pytorch attention
[2025-03-25 20:34:30.022] ComfyUI version: 0.3.27
[2025-03-25 20:34:30.045] ComfyUI frontend version: 1.14.5
[2025-03-25 20:34:30.046] [Prompt Server] web root: D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
[2025-03-25 20:34:31.125] [Crystools INFO] Crystools version: 1.22.1
[2025-03-25 20:34:31.151] [Crystools INFO] CPU: AMD Ryzen 7 5800X 8-Core Processor - Arch: AMD64 - OS: Windows 10
[2025-03-25 20:34:31.161] [Crystools INFO] Pynvml (Nvidia) initialized.
[2025-03-25 20:34:31.162] [Crystools INFO] GPU/s:
[2025-03-25 20:34:31.176] [Crystools INFO] 0) NVIDIA GeForce RTX 3060
[2025-03-25 20:34:31.177] [Crystools INFO] NVIDIA Driver: 572.83
[2025-03-25 20:34:33.606]
[Griptape Custom Nodes]:
[2025-03-25 20:34:33.608] - Custom routes initialized.
[2025-03-25 20:34:33.608] - Done!
[2025-03-25 20:34:33.609]
[2025-03-25 20:34:33.636] Total VRAM 12288 MB, total RAM 65448 MB
[2025-03-25 20:34:33.637] pytorch version: 2.6.0+cu126
[2025-03-25 20:34:33.638] Set vram state to: NORMAL_VRAM
[2025-03-25 20:34:33.639] Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
[2025-03-25 20:34:33.686] Found LoRA roots:
- D:/ComfyUI_windows_portable/ComfyUI/models/loras
[2025-03-25 20:34:33.705] Added static route /loras_static/root1/preview -> D:/ComfyUI_windows_portable/ComfyUI/models/loras
[2025-03-25 20:34:33.705] Added route mapping: D:/ComfyUI_windows_portable/ComfyUI/models/loras -> /loras_static/root1/preview
[2025-03-25 20:34:33.706] Started monitoring: D:/ComfyUI_windows_portable/ComfyUI/models/loras
[2025-03-25 20:34:33.717] ### Loading: ComfyUI-Manager (V3.31.8)
[2025-03-25 20:34:33.717] [ComfyUI-Manager] network_mode: public
[2025-03-25 20:34:33.869] ### ComfyUI Revision: 3288 [75c1c757] *DETACHED | Released on '2025-03-21'
[2025-03-25 20:34:34.346] [SD Prompt Reader] Node version: 1.3.4
[2025-03-25 20:34:34.347] [SD Prompt Reader] Core version: 1.3.5
[2025-03-25 20:34:34.353] (pysssss:WD14Tagger) [DEBUG] Available ORT providers: AzureExecutionProvider, CPUExecutionProvider
[2025-03-25 20:34:34.353] (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider
[2025-03-25 20:34:34.425] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[2025-03-25 20:34:34.426] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[2025-03-25 20:34:34.446] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[2025-03-25 20:34:34.491] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[2025-03-25 20:34:34.537] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[2025-03-25 20:34:34.591] # 😺dzNodes: LayerStyle -> Cannot import name 'guidedFilter' from 'cv2.ximgproc'
A few nodes cannot works properly, while most nodes are not affected. Please REINSTALL package 'opencv-contrib-python'.
For detail refer to https://github.com/chflame163/ComfyUI_LayerStyle/issues/5
[2025-03-25 20:34:35.232] [tinyterraNodes] Loaded
[2025-03-25 20:34:35.266]
Import times for custom nodes:
[2025-03-25 20:34:35.266] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
[2025-03-25 20:34:35.266] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-crystools-save
[2025-03-25 20:34:35.267] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_PRNodes
[2025-03-25 20:34:35.268] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Miaoshouai-Tagger
[2025-03-25 20:34:35.269] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-WD14-Tagger
[2025-03-25 20:34:35.270] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Iterative-Mixer
[2025-03-25 20:34:35.270] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
[2025-03-25 20:34:35.270] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfy-image-saver
[2025-03-25 20:34:35.271] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-reader-node
[2025-03-25 20:34:35.271] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-lora-manager
[2025-03-25 20:34:35.271] 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes
[2025-03-25 20:34:35.272] 0.1 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes
[2025-03-25 20:34:35.272] 0.4 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Crystools
[2025-03-25 20:34:35.273] 0.6 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
[2025-03-25 20:34:35.273] 0.6 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-florence2
[2025-03-25 20:34:35.273] 0.9 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle_Advance
[2025-03-25 20:34:35.274] 1.8 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Griptape
[2025-03-25 20:34:35.274]
[2025-03-25 20:34:35.287] Starting server
[2025-03-25 20:34:35.288] LoRA Manager: Cache initialization completed
[2025-03-25 20:34:35.288] To see the GUI go to: http://127.0.0.1:8188
[2025-03-25 20:34:39.072] FETCH ComfyRegistry Data: 5/79
[2025-03-25 20:34:44.490] FETCH ComfyRegistry Data: 10/79
[2025-03-25 20:34:48.298] FETCH ComfyRegistry Data: 15/79
[2025-03-25 20:34:52.115] FETCH ComfyRegistry Data: 20/79
[2025-03-25 20:34:57.908] FETCH ComfyRegistry Data: 25/79
[2025-03-25 20:35:02.298] FETCH ComfyRegistry Data: 30/79
[2025-03-25 20:35:06.579] FETCH ComfyRegistry Data: 35/79
[2025-03-25 20:35:10.931] FETCH ComfyRegistry Data: 40/79
[2025-03-25 20:35:14.691] FETCH ComfyRegistry Data: 45/79
[2025-03-25 20:35:19.025] FETCH ComfyRegistry Data: 50/79
[2025-03-25 20:35:20.686] - [WARNING]: ollama_base_url is not set
(the same warning repeats 11 more times, 20:35:20.687 through 20:35:20.700)
[2025-03-25 20:35:24.045] FETCH ComfyRegistry Data: 55/79
[2025-03-25 20:35:27.660] got prompt
[2025-03-25 20:35:27.672] - [WARNING]: ollama_base_url is not set
(repeated 4 times, 20:35:27.672 through 20:35:27.676)
[2025-03-25 20:35:27.679] Failed to validate prompt for output 55:
[2025-03-25 20:35:27.680] * (prompt):
[2025-03-25 20:35:27.681] - Value not in list: sampler_name: 'Illustrious\animij_v10.safetensors' not in (list of length 35)
[2025-03-25 20:35:27.681] - Value 1857 bigger than max of 100: quality_jpeg_or_webp
[2025-03-25 20:35:27.682] - Required input is missing: ckpt_name
[2025-03-25 20:35:27.682] - Value not in list: scheduler: 'euler' not in ['normal', 'karras', 'exponential', 'sgm_uniform', 'simple', 'ddim_uniform', 'beta', 'linear_quadratic', 'kl_optimal']
[2025-03-25 20:35:27.683] - Required input is missing: ckpt_hash
[2025-03-25 20:35:27.683] * Save Image w/Metadata 55:
[2025-03-25 20:35:27.683] - Value not in list: sampler_name: 'Illustrious\animij_v10.safetensors' not in (list of length 35)
[2025-03-25 20:35:27.684] - Value 1857 bigger than max of 100: quality_jpeg_or_webp
[2025-03-25 20:35:27.685] - Required input is missing: ckpt_name
[2025-03-25 20:35:27.685] - Value not in list: scheduler: 'euler' not in ['normal', 'karras', 'exponential', 'sgm_uniform', 'simple', 'ddim_uniform', 'beta', 'linear_quadratic', 'kl_optimal']
[2025-03-25 20:35:27.686] - Required input is missing: ckpt_hash
[2025-03-25 20:35:27.686] Output will be ignored
[2025-03-25 20:35:27.690] - [WARNING]: ollama_base_url is not set
(repeated 4 times, 20:35:27.690 through 20:35:27.694)
[2025-03-25 20:35:27.704] WARNING: [Errno 2] No such file or directory: 'D:\\ComfyUI_windows_portable\\ComfyUI\\input\\293311ae673e00258eb692921dd6aa19.jpg'
[2025-03-25 20:35:27.755] D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:118: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
warnings.warn(
[2025-03-25 20:35:29.522] elsa_\(frozen\), 1girl, solo, long_hair, breasts, looking_at_viewer, blue_eyes, large_breasts, blonde_hair, navel, jewelry, sitting, nipples, swimsuit, braid, thighs, earrings, parted_lips, spread_legs, mole, coat, covered_nipples, fur_trim, see-through, lips, one-piece_swimsuit, pubic_hair, muscular, single_braid, makeup, thick_thighs, piercing, abs, female_pubic_hair, hair_over_shoulder, breasts_apart, areola_slip, eyeshadow, toned, nose, realistic, white_one-piece_swimsuit, nipple_slip, slingshot_swimsuit, throne, pubic_hair_peek
[2025-03-25 20:35:29.676] # 😺dzNodes: LayerStyle -> Error loading model or tokenizer: Unrecognized configuration class <class 'transformers_modules.CogFlorence-2.1-Large.configuration_florence2.Florence2LanguageConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of AriaTextConfig, BambaConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, DiffLlamaConfig, ElectraConfig, Emu3Config, ErnieConfig, FalconConfig, FalconMambaConfig, FuyuConfig, GemmaConfig, Gemma2Config, Gemma3Config, Gemma3TextConfig, GitConfig, GlmConfig, GotOcr2Config, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GraniteConfig, GraniteMoeConfig, GraniteMoeSharedConfig, HeliumConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MllamaConfig, MoshiConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, Olmo2Config, OlmoeConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PhimoeConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, ZambaConfig, Zamba2Config.
[2025-03-25 20:35:29.697] !!! Exception during processing !!! 'NoneType' object is not callable
[2025-03-25 20:35:29.700] Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle_Advance\py\florence2_ultra.py", line 589, in florence2_image2prompt
results, output_image = process_image(model, processor, img, task, max_new_tokens, num_beams,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle_Advance\py\florence2_ultra.py", line 243, in process_image
result = run_example(model, processor, task_prompt, image, max_new_tokens, num_beams, do_sample)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle_Advance\py\florence2_ultra.py", line 214, in run_example
inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
[2025-03-25 20:35:29.702] Prompt executed in 2.01 seconds
[2025-03-25 20:35:30.296] FETCH ComfyRegistry Data: 60/79
[2025-03-25 20:35:34.707] FETCH ComfyRegistry Data: 65/79
[2025-03-25 20:35:38.418] FETCH ComfyRegistry Data: 70/79
[2025-03-25 20:35:43.728] FETCH ComfyRegistry Data: 75/79
[2025-03-25 20:35:48.000] FETCH ComfyRegistry Data [DONE]
[2025-03-25 20:35:48.083] [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
[2025-03-25 20:35:48.105] FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
[2025-03-25 20:35:48.268] [ComfyUI-Manager] All startup tasks have been completed.
Other
No response
Many people are having this issue with the Florence2 nodes of ComfyUI_LayerStyle: https://github.com/chflame163/ComfyUI_LayerStyle/issues?q=is%3Aissue%20state%3Aopen%20TypeError%3A%20%27NoneType%27%20object%20is%20not%20callable
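For context, that TypeError is simply what Python raises when a value that is None gets called like a function. Here the model/processor load fails (see the "Error loading model or tokenizer" line in the log), the node is left holding None, and the later `processor(...)` call blows up. A minimal sketch of the failure mode (the variable name is illustrative, not the node's actual code):

```python
# If loading fails upstream, the node can be left holding None instead of
# a processor object; calling it then raises exactly the reported error.
processor = None  # stand-in for a failed AutoProcessor load

try:
    processor(text="<CAPTION>", images=None)
except TypeError as e:
    print(e)  # 'NoneType' object is not callable
```

So the real problem to fix is the model/tokenizer load failure above, not the call site in the traceback.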
You should try using ComfyUI-Florence2 instead.
Or you can try downgrading transformers: https://github.com/chflame163/ComfyUI_LayerStyle/issues/322#issuecomment-2413785020 (the version given in that comment is apparently wrong)
Open a terminal inside the python_embeded folder and run this command to downgrade transformers to 4.43.2 (the version pinned in the LayerStyle requirements):
.\python.exe -m pip install transformers==4.43.2
That gives me this error: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. image-reward 1.5 requires fairscale==0.4.13, but you have fairscale 0.4.0 which is incompatible.
And by the way, it worked for me until I updated last Sunday. I don't think it's good to say "well, many people have problems with it, so forget about it": it worked perfectly for me until I updated Comfy and all its dependencies with the BAT file, after which nothing worked anymore. I had to reinstall, and now I get the same error for the Layer Advance nodes. Just a suggestion, but could you work together instead of pointing fingers? Something like "we are going to upgrade these dependencies, please follow along"? If one simple update keeps bricking everyone's favourite node, it's not fun anymore; it's a nuthouse. So what command now? pip install fairscale? And so on and so on?
Nope: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. torchscale 0.3.0 requires fairscale==0.4.0, but you have fairscale 0.4.13 which is incompatible
> image-reward 1.5 requires fairscale==0.4.13, but you have fairscale 0.4.0 which is incompatible.
That conflict doesn't seem to be caused by downgrading transformers.
> it worked perfectly for me, until I updated comfy and all it's dependencies with the BAT file [...] So what command now? Pip install fairscale? and so on and so on?
I've drawn your attention to the fact that other people are having this issue.
Dependency conflicts are a sensitive thing; you need to understand what you are doing.
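If you want to see every conflict pip knows about before changing anything (not just the one printed during an install), `pip check` lists them all; run it from the python_embeded folder of the portable build (path assumed from the log above):

```shell
# Report every installed package whose declared dependencies are unsatisfied.
# .\python.exe is the embedded interpreter shipped with ComfyUI portable.
.\python.exe -m pip check
```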
Anyway, after that last fairscale upgrade/downgrade it seems to be working again. Now I have an issue with the LoRA Manager; I'm already chatting with the programmer of that one.
Just installed a 5080 and wanted to test my workflow, and guess what: Florence2 is at it again. Downgrading transformers as I type this; I may have to do the fairscale step as well and hope it works then.
fairscale==0.4.0 transformers==4.49.0 torchscale==0.3.0
That solved it perfectly.
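For anyone wanting to replicate that working combination, the three pins above can be installed in one command from the python_embeded folder (same portable layout as earlier in the thread; adjust the path if yours differs):

```shell
# Pin the trio reported above as working together.
.\python.exe -m pip install fairscale==0.4.0 transformers==4.49.0 torchscale==0.3.0
```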
I did all that, but nothing worked; rather, it broke my ComfyUI setup.
This is what I keep getting every time I try to run it.
Each one gives me different errors.
No solution yet? EDIT: I ended up abandoning this for the 2Run version.