Fanghua-Yu
Thanks! Since `ImageSlider` can only display a single result, we added it as an optional choice.
I tried to replace the SDXL UNet with [Juggernaut_RunDiffusionPhoto2_Lightning_4Steps.safetensors](https://huggingface.co/RunDiffusion/Juggernaut-XL-Lightning/blob/main/Juggernaut_RunDiffusionPhoto2_Lightning_4Steps.safetensors). With 8 diffusion steps and a CFG of 1.5-2, it shows acceptable quality. Currently, I am trying to switch the sampler...
> It runs more than 400 seconds and still nothing. It seems it's not working even on a 4090. Getting stuck and returning nothing should be a RAM issue (i'm trying...
Currently, it is difficult to run under this hardware requirement. We will publish an online demo as soon as possible.
> So my original image resolution is 360×239
>
> whether Stage 2 Upscale is set to 1, 2, or 3, the output is always 1536 x 1024
>
> ...
Hello, conditions for SDXL can be found here (`aesthetic_score` is useless for SDXL-base): https://github.com/Fanghua-Yu/SUPIR/blob/bca9a727b8a756ddaad7f13401631b5fde9f7f66/SUPIR/models/SUPIR_model.py#L154-L157
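For readers who cannot follow the link, here is a hedged sketch of what SDXL micro-conditioning inputs typically look like; the key names follow the `generative-models` (sgm) convention, and the concrete values are illustrative, not taken from SUPIR:

```python
# Sketch of SDXL micro-conditioning inputs (sgm-style key names).
# Values here are illustrative defaults, not SUPIR's actual settings.
def sdxl_conditions(height, width):
    return {
        "original_size_as_tuple": (height, width),
        "crop_coords_top_left": (0, 0),
        "target_size_as_tuple": (height, width),
        # `aesthetic_score` only conditions the SDXL refiner, which is
        # why it has no effect on SDXL-base.
        "aesthetic_score": 6.0,
    }

cond = sdxl_conditions(1024, 1024)
```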
Hello, please print `torch.cuda.device_count()` first. In your case it should be 1, but it currently returns zero.
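One common reason `torch.cuda.device_count()` returns zero on a machine that has a GPU is a stale or empty `CUDA_VISIBLE_DEVICES` environment variable. A minimal stdlib-only sketch for checking it (the helper name is hypothetical):

```python
import os

def visible_cuda_devices():
    """Return the GPU indices exposed via CUDA_VISIBLE_DEVICES.

    An empty string hides every GPU, which makes
    torch.cuda.device_count() return 0 even when a GPU is present.
    """
    value = os.environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:  # unset: all GPUs are visible
        return None
    return [int(i) for i in value.split(",") if i.strip() != ""]

print(visible_cuda_devices())
```

If this returns `[]`, unset the variable (or set it to `0`) before launching.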
@ivaxsirc We have not applied a face restoration pipeline yet. Generally, like CodeFormer, it requires three stages: 1) face detection, 2) restoring each face independently, 3) pasting the faces back. I'm working...
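The three stages above can be sketched as follows; the detector and restorer here are stand-in stubs for illustration, not CodeFormer's actual API:

```python
import numpy as np

def detect_faces(img):
    # Stub detector: pretend one face occupies the top-left quarter.
    # A real pipeline would run a face detector here.
    h, w = img.shape[:2]
    return [(0, 0, h // 2, w // 2)]  # (top, left, bottom, right)

def restore_face(crop):
    # Stub restorer: a real pipeline would run a restoration model here.
    return np.clip(crop.astype(np.float32) * 1.1, 0, 255).astype(np.uint8)

def restore_image(img):
    out = img.copy()
    for top, left, bottom, right in detect_faces(img):   # 1) detection
        crop = img[top:bottom, left:right]
        restored = restore_face(crop)                    # 2) restoration
        out[top:bottom, left:right] = restored           # 3) paste back
    return out

img = np.full((4, 4, 3), 100, dtype=np.uint8)
result = restore_image(img)  # same shape; face region brightened
```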
Hello, please update this repo and try to modify `CKPT_PTH.py` like this:

```
LLAVA_CLIP_PATH = '/opt/clip-vit-large-patch14-336'
LLAVA_MODEL_PATH = '/opt/llava-v1.5-13b'
SDXL_CLIP1_PATH = '/opt/clip-vit-large-patch14'
SDXL_CLIP2_CACHE_DIR = '/opt/CLIP-ViT-bigG-14-laion2B-39B-b160k/open_clip_pytorch_model.bin'
```
> It's awesome, a next level to restore old resolution; as you see, the image is perfect but the face needs improvement. With Topaz AI, restore the face and the...