Álvaro Somoza
@yiyixuxu yeah, SDE looks good here; in fact they all look better than the tests I did. Going to try with some photo-realistic prompts.
I've tested with more prompts and I like the results. [comparison grid: 25 steps | 10 steps | AYS — images omitted] With SDE I get the same bad results as comfyui: [comparison grid: 25 steps | 10 steps | AYS — images omitted] Also I know they're...
@yiyixuxu maybe it's that I had bad seed luck and just found out about it; also it seems to happen more with the finetunes than with the base model. This is the code...
@yiyixuxu Since this is a really easy fix it could be tagged with `contributions-welcome` and `good first issue`. If no one takes it I can do it.
My two cents here is that Marigold should be added to the core now; I like it a lot, and with LCM it should be fast. The model has almost...
After reading about this issue, this doesn't seem to be a problem with `diffusers` but an environment issue on the users' side. Also, we cannot help you with that repository because they [use](https://github.com/bmaltais/kohya_ss/blob/master/requirements.txt)...
Hi, this happens because you're using `from_single_file` with `StableDiffusionInpaintPipeline` and a normal (non-inpainting) model; you'll need to add `num_in_channels=4`:

```python
pipeline = StableDiffusionInpaintPipeline.from_single_file(
    model_path,
    torch_dtype=torch.float16,
    num_in_channels=4,
).to("cuda")
```

You can read more...
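For context on why `num_in_channels` matters (my understanding of the channel layout, not quoted from the docs): a true inpainting UNet's input convolution expects 9 channels, because the pipeline concatenates the noisy latents, the VAE-encoded masked image, and the downsampled mask. A normal text-to-image checkpoint only has the 4 latent channels, so `from_single_file` must be told to build the 4-channel variant. A minimal sketch of that arithmetic:

```python
# Channel layout an SD inpainting UNet concatenates at its input conv
# (standard Stable Diffusion latent space).
LATENT_CHANNELS = 4         # noisy image latents
MASKED_LATENT_CHANNELS = 4  # VAE-encoded masked image
MASK_CHANNELS = 1           # downsampled binary mask

inpaint_in_channels = LATENT_CHANNELS + MASKED_LATENT_CHANNELS + MASK_CHANNELS
print(inpaint_in_channels)  # 9 for a dedicated inpainting checkpoint

# A normal text-to-image checkpoint only stores the latent channels,
# hence num_in_channels=4 when loading it into the inpaint pipeline.
text2img_in_channels = LATENT_CHANNELS
print(text2img_in_channels)  # 4
```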
Oh, I missed that you're using an SDXL model; you'll need to use `StableDiffusionXLInpaintPipeline`:

```python
device = "cuda"
model_path = "weights/realisticStockPhoto_v20.safetensors"

pipe = StableDiffusionXLInpaintPipeline.from_single_file(
    model_path,
    torch_dtype=torch.float16,
    num_in_channels=4,
).to("cuda")
pipe.load_lora_weights(".", weight_name="weights/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors", ...
```
Hi, you need to add the `--from_safetensors` arg.
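For reference, a hedged sketch of how that flag is usually passed (assuming the `diffusers` checkpoint-conversion script; the script path and file paths here are placeholders, not taken from the issue):

```shell
python scripts/convert_original_stable_diffusion_to_diffusers.py \
    --checkpoint_path ./model.safetensors \
    --from_safetensors \
    --dump_path ./converted_model
```

Without `--from_safetensors`, the script tries to load the checkpoint as a pickled `.ckpt` file and fails on `.safetensors` input.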
I don't quite understand your whole problem because you're mixing several things in the same issue. Compel and textual inversion are unrelated to the mask, so they don't...