I have CUDA installed; however, this error persists. This is unfortunate, as it leaves llama-index inoperable on Windows.
I have this installed on Windows now. I'm not sure what steps got it working, but it runs under a normal Windows Python venv for llama-index. Not sure if...
> @WAS-PlaiLabs
> > What did you have to change locally in order for it to install?

I'm not positive. I think it's because I updated build tools. But when...
Console logs would be the most helpful thing for diagnosing any import errors from the custom_node. If you have them, please copy and paste them here, or upload them.
I don't understand your problem. The token system is persistent, so if you have something like `portrait_prefix` with the value `A 3d matte portrait of`, you could use that in...
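For illustration only, here is a minimal sketch of what that kind of persistent token substitution looks like. The bracketed `[portrait_prefix]` syntax, the `TOKENS` store, and the `replace_tokens` helper are assumptions for the example, not the actual node implementation:

```python
# Hypothetical sketch of persistent token substitution, not the real node code.
TOKENS = {
    "portrait_prefix": "A 3d matte portrait of",
}

def replace_tokens(text: str, tokens: dict) -> str:
    """Replace every [token_name] occurrence with its stored value."""
    for name, value in tokens.items():
        text = text.replace(f"[{name}]", value)
    return text

print(replace_tokens("[portrait_prefix] a wizard in a forest", TOKENS))
# -> "A 3d matte portrait of a wizard in a forest"
```

Because the token store persists, the same prefix can be reused across any prompt that references it.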
It works for SDXL; you just can't mix it with other non-SDXL models/conditioning.

One SDXL model: (screenshot)

Two SDXL models: (screenshot)
Yes, I'll squeeze that into the next update.
`{a|b|c}` would resolve to one of those values at random; that's how dynamic prompts work in ComfyUI. Same for CLIPTextEncode.
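As a rough standalone illustration of that selection behavior (this is not ComfyUI's actual parser, just a sketch of the `{a|b|c}` idea):

```python
import random
import re

def resolve_dynamic_prompt(prompt: str) -> str:
    """Replace each {a|b|c} group with one randomly chosen option."""
    pattern = re.compile(r"\{([^{}]+)\}")
    # Replace innermost groups first until no brace groups remain.
    while True:
        match = pattern.search(prompt)
        if match is None:
            return prompt
        choice = random.choice(match.group(1).split("|"))
        prompt = prompt[:match.start()] + choice + prompt[match.end():]

print(resolve_dynamic_prompt("a {red|green|blue} house"))
# e.g. "a green house"
```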
Of course it does. Otherwise it would have `'dynamicPrompts': False` as part of the multiline options. This is base ComfyUI behavior for any multiline text field unless it's explicitly disabled. ...
It did, and I just removed that ability by setting `"dynamicPrompts": False` in the latest PR. Regarding regular CLIPTextEncode, it works fine when I convert inputs, and I do that regularly...
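For context, this is roughly where that option lives in a custom node's input definition. The node name below is made up; the `"multiline"`/`"dynamicPrompts"` options are the usual ComfyUI way to control whether a `STRING` widget gets `{a|b|c}` evaluation:

```python
# Sketch of a ComfyUI custom node input definition (hypothetical node name).
class ExampleTextNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # Multiline text box; dynamic prompt syntax is left untouched
                # because dynamicPrompts is explicitly disabled.
                "text": ("STRING", {"multiline": True, "dynamicPrompts": False}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"
    CATEGORY = "example"

    def run(self, text):
        return (text,)
```

With `"dynamicPrompts": False`, the text reaches the node exactly as typed; omit the option (or set it to `True`) and ComfyUI resolves the `{a|b|c}` groups before the node sees the string.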