rgthree-comfy
Power Lora Loader string output?
Hi! Efficiency Nodes has a Lora Stack to String converter node that I (want to) use to save the used loras in the image metadata with this custom node: https://github.com/alexopus/ComfyUI-Image-Saver. I can already do that with the Wildcard Encode node from the Inspire Pack, but doing it with a lora stacker would be cooler. Could you make this compatible with the Power Lora Loader node? Maybe an output to expose the converted strings or something? Because I like your Lora Loader even better than these lora stackers.
Thank you!
I second this request! I would love a node output that shows a list of all LoRAs activated in the Power LoRA Loader node. It would make logging and troubleshooting much easier.
Thank you.
3rd this request! I was actually just coming here to make the same request - it would be nice if it could be compatible with ComfyUI-Lora-Auto-Trigger-Words or if there were a method implemented into rgthree nodes to get lora trigger words either from comfyui or an associated civitai.info file.
This gets part of the way there.
Out of all the LoRA loaders, I think the Power LoRA Loader is the most comfortable one. I'd like to embed the metadata with it too, so I highly second the addition from @DrJKL.
Alternatively, having a second node 'Power Lora Stack' would be great.
Oh, now I see it's actually WIP
This gets part of the way there.
I came here looking for this feature, I've been wanting to add metadata, and this is only lora loader I like.
Unfortunately, I'm no coder, but it's interesting where technology has gone... so with some AI help I was able to get a bit of a dirty solution to this. Should anyone want to use it I'll post the code here. Just replace the "power_lora_loader.py" contents:
```python
from nodes import LoraLoader

from .constants import get_category, get_name
from .power_prompt_utils import get_lora_by_filename
from .utils import FlexibleOptionalInputType, any_type
import os


class RgthreePowerLoraLoader:
  """The Power Lora Loader is a powerful, flexible node to add multiple loras to a model/clip."""

  NAME = get_name('Power Lora Loader')
  CATEGORY = get_category()

  @classmethod
  def INPUT_TYPES(cls):  # pylint: disable=invalid-name, missing-function-docstring
    return {
      "required": {
        "model": ("MODEL",),
        "clip": ("CLIP",),
      },
      "optional": FlexibleOptionalInputType(any_type),
      "hidden": {},
    }

  RETURN_TYPES = ("MODEL", "CLIP", "STRING")
  RETURN_NAMES = ("MODEL", "CLIP", "LORA_FILENAMES")
  FUNCTION = "load_loras"

  def load_loras(self, model, clip, **kwargs):
    """Loops over the provided loras in kwargs, applies the valid ones,
    and returns their names and strengths as a formatted string."""
    filenames = []  # List of formatted <lora:name:strength> strings
    for key, value in kwargs.items():
      key = key.upper()
      if key.startswith('LORA_') and 'on' in value and 'lora' in value and 'strength' in value:
        strength_model = value['strength']
        strength_clip = value['strengthTwo'] if 'strengthTwo' in value and value['strengthTwo'] is not None else strength_model
        if value['on'] and (strength_model != 0 or strength_clip != 0):
          lora_filename = value['lora']
          lora_name = os.path.splitext(os.path.basename(lora_filename))[0]  # Filename without extension
          # Format the output string as <lora:filename:strength>
          formatted_string = f"<lora:{lora_name}:{strength_model}>"
          model, clip = LoraLoader().load_lora(model, clip, lora_filename, strength_model, strength_clip)
          filenames.append(formatted_string)
    # Concatenate all formatted names into a single comma-separated string
    filenames_str = ", ".join(filenames)
    return (model, clip, filenames_str)
```
Likely not that well written, but it does the job: it outputs the names and strengths into a string for saving in the image metadata.
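To see what the string output looks like without running ComfyUI, the formatting logic from the loop above can be exercised in isolation. The kwargs dict below is a stand-in with made-up lora names, shaped like the widget values the node receives:

```python
import os

# Stand-in for the widget values the Power Lora Loader receives via **kwargs
# (the dict shape is taken from the node code above; names are invented).
kwargs = {
    'lora_1': {'on': True, 'lora': 'styles/ink_sketch.safetensors', 'strength': 0.8},
    'lora_2': {'on': False, 'lora': 'faces/portrait.safetensors', 'strength': 1.0},
}

filenames = []
for key, value in kwargs.items():
    # Only enabled entries with a non-zero strength are formatted.
    if key.upper().startswith('LORA_') and value.get('on') and value.get('strength'):
        name = os.path.splitext(os.path.basename(value['lora']))[0]
        filenames.append(f"<lora:{name}:{value['strength']}>")

print(", ".join(filenames))  # <lora:ink_sketch:0.8>
```

Only the enabled lora makes it into the output; the disabled one is skipped, matching the `value['on']` check in the node.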
thank you Hattori0 - this was great help for me
I really needed that function, so I changed the code slightly: it outputs the lora name but also the full path (each on a new line).
This is useful in my case because I have loras in different folders, and the paths can help automate some processes and reduce node bloat. Just in case someone else likes the change:
```python
from nodes import LoraLoader

from .constants import get_category, get_name
from .power_prompt_utils import get_lora_by_filename
from .utils import FlexibleOptionalInputType, any_type
import os
import folder_paths  # Needed below to resolve the absolute lora path


class RgthreePowerLoraLoader:
  """The Power Lora Loader is a powerful, flexible node to add multiple loras to a model/clip."""

  NAME = get_name('Power Lora Loader')
  CATEGORY = get_category()

  @classmethod
  def INPUT_TYPES(cls):  # pylint: disable=invalid-name, missing-function-docstring
    return {
      "required": {
        "model": ("MODEL",),
        "clip": ("CLIP",),
      },
      "optional": FlexibleOptionalInputType(any_type),
      "hidden": {},
    }

  RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING")
  RETURN_NAMES = ("MODEL", "CLIP", "LORA NAMES", "LORA PATHS")
  FUNCTION = "load_loras"

  def load_loras(self, model, clip, **kwargs):
    """Loops over the provided loras in kwargs, applies the valid ones,
    and returns their names and full paths, one per line."""
    paths = []
    filenames = []  # List of formatted <lora:name:strength> strings
    for key, value in kwargs.items():
      key = key.upper()
      if key.startswith('LORA_') and 'on' in value and 'lora' in value and 'strength' in value:
        strength_model = value['strength']
        strength_clip = value['strengthTwo'] if 'strengthTwo' in value and value['strengthTwo'] is not None else strength_model
        if value['on'] and (strength_model != 0 or strength_clip != 0):
          lora_filename = value['lora']
          lora_path = os.path.abspath(os.path.join(folder_paths.get_folder_paths("loras")[0], lora_filename))
          lora_name = os.path.splitext(os.path.basename(lora_filename))[0]
          formatted_string = f"<lora:{lora_name}:{strength_model}>"
          model, clip = LoraLoader().load_lora(model, clip, lora_filename, strength_model, strength_clip)
          filenames.append(formatted_string)
          paths.append(lora_path)
    # Join names and paths, one entry per line
    filenames_str = "\n".join(filenames)
    paths_str = "\n".join(paths)
    return (model, clip, filenames_str, paths_str)
```
Sorry for the dumb question, but is it possible to make this node pass along only the Lora names and strengths, without their paths and other information like this:
"loras": "{'on': False, 'lora': '__IL\\testing\\250418\\Jasmine_Flamesworth_The_Beginning_After_the_End.safetensors', 'strength': 0.9999999999999997},
I guess it will make the node compatible with Civitai metadata, which is now an issue...
Here's a dumb answer (not much of a coder here), but it works fine for me.
I changed the code even further to get more info I might need.
I have each lora coupled with a text file in which I manually wrote the parameters the lora requires. I then renamed the extension to .lor to better distinguish the file in the loras folder. Inside the .lor text file, the first line contains the info that will go into the clip prompt (trigger, strength, etc.); below that, anything else I might require.
So: sketch.lora + sketch.lor
Now that the lora loader gives me the path, I use a text loader node to read the first line and inject it into the prompt; plus I load the other info into a display node in case I want more info from the .lor file.
Then I just concatenate the text prompts and it works.
I've done a very similar thing with checkpoints and it's fab.
Hopefully it's useful. Workflow, file screengrab, and code below:
```python
from nodes import LoraLoader

from .constants import get_category, get_name
from .power_prompt_utils import get_lora_by_filename
from .utils import FlexibleOptionalInputType, any_type
import os
import folder_paths


class RgthreePowerLoraLoader:
  """The Power Lora Loader is a powerful, flexible node to add multiple loras to a model/clip."""

  NAME = get_name('Power Lora Loader')
  CATEGORY = get_category()

  @classmethod
  def INPUT_TYPES(cls):
    return {
      "required": {
        "model": ("MODEL",),
        "clip": ("CLIP",),
      },
      "optional": FlexibleOptionalInputType(any_type),
      "hidden": {},
    }

  RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING", "STRING", "STRING")
  RETURN_NAMES = (
    "MODEL",
    "CLIP",
    "LORA_NAMES",
    "LORA_PATHS",
    "LORA_TRIGGERS",
    "LORA_PROMPTS"
  )
  FUNCTION = "load_loras"

  def load_loras(self, model, clip, **kwargs):
    names = []
    paths = []
    triggers = []
    combined = []
    for key, value in kwargs.items():
      key = key.upper()
      if key.startswith('LORA_') and 'on' in value and 'lora' in value and 'strength' in value:
        strength_model = value['strength']
        # Fall back to the model strength when strengthTwo is absent or None
        strength_clip = value['strengthTwo'] if value.get('strengthTwo') is not None else strength_model
        if value['on'] and (strength_model != 0 or strength_clip != 0):
          lora_filename = value['lora']
          lora_path = os.path.abspath(os.path.join(folder_paths.get_folder_paths("loras")[0], lora_filename))
          lora_name = os.path.splitext(os.path.basename(lora_filename))[0]
          # Get the trigger text from the sidecar .lor file, if present
          lor_path = lora_path.replace(".safetensors", ".lor")
          try:
            with open(lor_path, 'r', encoding='utf-8') as f:
              trigger = f.readline().strip()
          except OSError:
            trigger = ""
          formatted_string = f"<lora:{lora_name}:{strength_model}>"
          model, clip = LoraLoader().load_lora(model, clip, lora_filename, strength_model, strength_clip)
          names.append(formatted_string)
          paths.append(lora_path)
          triggers.append(trigger)
          combined.append(f"{formatted_string}, {trigger}")
    return (
      model,
      clip,
      "\n".join(names),
      "\n".join(paths),
      "\n".join(triggers),
      "\n".join(combined)
    )
```
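The sidecar convention in the code above (a `.lor` text file next to each lora, whose first line holds the trigger text) can be tried out in isolation. This is just a sketch with invented file names and contents, using a temporary directory as a stand-in for the loras folder:

```python
import os
import tempfile

# Create a stand-in .lor sidecar file: first line is the trigger text for the
# clip prompt, further lines hold any other notes.
with tempfile.TemporaryDirectory() as loras_dir:
    lor_path = os.path.join(loras_dir, 'sketch.lor')
    with open(lor_path, 'w', encoding='utf-8') as f:
        f.write('sketch style, 0.8\nnotes: works best with low CFG\n')

    # Read only the first line, as the node does; fall back to '' when the
    # sidecar file is missing.
    try:
        with open(lor_path, 'r', encoding='utf-8') as f:
            trigger = f.readline().strip()
    except OSError:
        trigger = ''

    print(trigger)  # sketch style, 0.8
```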
Thanks, it gives me something to think about =)
Problem solved, SaveImageWithMetaData node recognises Lora info correctly
Okay, long time coming. But I was playing with comfyui again and got rid of all my nodes packs. There were a couple basic ones I had that were doing various things. Basic arithmetic, string concatenation, etc. So I thought I could do it all in one powerful node, the Power Puter: a flexible, pythonic computation node that can handle something as basic as string concatenation, to node evaluation and list comprehension.
Then I started thinking that this node could access the prompt without needing to be connected to another number or string node; things like grabbing the node inputs of an evaluating prompt. So, with that, along with some shortcuts for the Power Lora Loader specifically, you can grab the enabled loras from a Power Lora Loader node in the evaluating prompt and output them as a string:
The Power Puter is pretty beta and a power-user feature, but try it out. It will error if the evaluated code can't do something, like find the desired node, for instance. So if you mute your Power Lora Loader, you'd need to mute the Power Puter looking it up or you'll get an error.
More examples for the Power Lora Loader, showing more data too:
I monitor your commits, @rgthree, so I saw this new node coming and immediately thought it could be used for this purpose. I haven't tried it so far because, after 2 years, I still don't know whether, when I distribute APW to people, the node IDs change on their systems or not. But given that you've now shown how to identify an input node by name, I'll definitely try.
In my understanding, you manipulate direct and indirect inputs via Python language inside the node UI. What else can this node do? What are the limitations (aka can it run any type of Python code)?
I am trying to understand the full potential as well as the security risks attached to it.
Update: WHOA!
(I tried to manipulate the string a bit to remove "FLUX-based" and ".safetensors" but I get all sorts of errors, so I'll have to understand better what the limitations of this node are.)
Thanks for sharing another great node!
In my understanding, you manipulate direct and indirect inputs via Python language inside the node UI. What else can this node do? What are the limitations (aka can it run any type of Python code)?
The node interprets input code’s Abstract Syntax Tree, controlling the evaluation. Basically, it steps through each part of the code and determines what to do (and what it wants to allow).
Every statement's initial entry needs to be either an object from an input, a node from the prompt, a primitive, a list comprehension, or an f-string. There are also some functions that are manually exposed, like "ceil" and "random_choice", etc.
Because of this, it is NOT possible to author random python code from this node and expect it to be executed. Further, its controlled nature makes it secure in that it can only call and access from these entry instances.
Of course, if there was already malicious code on the user’s platform this node could execute it, but that’s true of any node. This node’s text box cannot author malicious code.
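The AST-walking approach described above can be illustrated with a minimal sketch. This is not the actual Power Puter implementation, just a toy evaluator showing the technique: parse the code, recurse over the tree, and reject any node type outside a whitelist, so arbitrary Python (imports, calls, attribute access) can never execute:

```python
import ast
import operator

# Whitelisted binary operators; any other syntax is rejected.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def safe_eval(code, names=None):
    """Evaluate a tiny expression language by walking its AST.

    Only constants, known names, and whitelisted operators are allowed,
    so code like __import__('os') raises instead of running."""
    names = names or {}

    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            if node.id in names:
                return names[node.id]
            raise ValueError(f"unknown name: {node.id}")
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError(f"disallowed syntax: {type(node).__name__}")

    return walk(ast.parse(code, mode='eval'))

print(safe_eval("a * 2 + 1", {"a": 3}))  # 7
```

A call like `safe_eval("__import__('os')")` raises `ValueError` because `ast.Call` isn't in the whitelist, which is the essence of the "controlled evaluation" rgthree describes.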
…but I get all sort of errors, so I'll have to understand better what are the limitations of this node
Let me know what errors you get. The node is pretty fresh, and I'm interested to see how people use it. The errors are likely from the "manual evaluation" work I mention above, not handling specific cases that perhaps could be handled.
Thanks for the clarification. One of the things I tried to do was to filter multiple input node names with a wildcard. I wanted a way to say: "Find all nodes that include the string 'LoRA for' irrespective of prefix and suffix."
I used OpenAI o3 to write the code and it offered multiple approaches, each leading to a different error. I got three different dead ends:
- 'UnaryOp' object has no attribute 'value'
- <ast.List object at 0x00000235B9E3EAD0>
- a tuple error I didn't save
(I understand these are quite useless without seeing the code that generated them. I can provide it if useful.)
In the end, I showed o3 your source code, and that helped define the limits of what we could do. So I settled for the code you see in the screenshot which, in my understanding, looks at every node in APW.
I don't know if it's the most efficient way to get what I wanted, but it's the only one that worked. So, for now, I have no other pending issues.
The only thing I could think of is that I have another potential use case where it would be nice to have 3 input pins instead of 2. But the main thing I needed to solve, it's now solved.
I see a new commit enabling search for a node by title, thank you! I don't know if this is intentional, but that change isn't reflected in the version available in ComfyUI Manager. If you check for updates, nothing new shows up for your node suite.
It looks like Manager is up to date. There hasn't been a commit since I closed this issue; search by node title was always there. I don't have a way to search by only part of a node title, but your image's example looks right; a little verbose for users to input, but it's still working as expected there.
One of the things I tried to do was to filter multiple input node names with a wildcard. I wanted a way to say: "Find all nodes that include the string 'LoRA for' irrespective of prefix and suffix."
Okay, I've just added a mechanism to compile a regex, and then use that in the nodes call in the latest.
NOTE: I also introduced a ~breaking change: nodes() now returns a list rather than a dict of id to node data. The node is new enough that I feel OK doing this, though I don't intend to introduce breaking changes from here on.
You can now search for multiple nodes by title using re which compiles a regex that will then use re.search. Your above logic could now be something like:
```
all_loras = [
  lora.name.replace('.safetensors', '')
  for node in nodes(re('LoRAs for'))
  for lora in node.loras
]
', '.join(all_loras)
```
It works great, thank you.
For other people reading, I slightly modified your suggested code to remove the folder names from the LoRA names with the following final code:
```
p = re('.*[\\\\/]')
all_loras = [
  p.sub('', lora.name)
  .replace('.safetensors', '')
  for node in nodes(re('FLUX LoRAs for'))
  for lora in node.loras
]
', '.join(all_loras)
```
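For anyone who wants to check the pattern outside the node, here is the same stripping logic in plain Python, assuming the Power Puter's `re(...)` maps onto Python's `re.compile` (which the thread suggests). The lora names are invented for illustration:

```python
import re

# Strip any leading folder path (either slash style), then the extension.
p = re.compile(r'.*[\\/]')
names = ['styles\\ink_sketch.safetensors', 'faces/portrait.safetensors']
cleaned = [p.sub('', n).replace('.safetensors', '') for n in names]
print(', '.join(cleaned))  # ink_sketch, portrait
```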
Hahaha, wish I could understand what the heck I'm looking at...
In the images above, I don't see any connections to the a and b inputs... what do I connect?
Would I be using this Puter node to extract the paths of the loras in the Power Lora Loader? Write some code to do so? Or have I misunderstood?