multidiffusion-upscaler-for-automatic1111
Tiled Upscaler only works in webui but not in API
Hi all,
I'm trying to upscale an image using Tiled Diffusion (and Tiled VAE). Here are my parameters:
In the webui it works well: the second output image comes from the upscaler.
[Tiled Diffusion] upscaling image with 4x-UltraSharp...
tiled upscale: 100%|████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 4.78it/s]
But when I apply the same parameters through the API:
payload = {
    "init_images": [
        "myimage"
    ],
    "seed": 3073412432,
    "sampler_index": "DPM++ 3M SDE Karras",
    "steps": 6,
    "denoising_strength": 0.35,
    "cfg_scale": 1,
    "prompt": "best quality, highres, <lora:more_details:1.0> <lora:SDXLrender_v2.0:1> <lora:LCM_LoRA_Weights_SD15:1>",
    "negative_prompt": "(worst quality, low quality, normal quality:2)",
    "width": img_output_width,
    "height": img_output_height,
    "alwayson_scripts": {
        "Tiled VAE": {
            "args": [
                True,       # enable
                1024, 96,   # encoder tile size, decoder tile size
                True,       # vae to gpu
                True,       # fast decoder
                True,       # fast encoder
                False,      # color fix
            ]
        },
        "Tiled Diffusion": {
            "args": [
                True,                # enable
                "MultiDiffusion",    # method
                True,                # override size
                True,                # keep input size
                img_output_width,    # image width
                img_output_height,   # image height
                112,                 # tile width
                144,                 # tile height
                4,                   # tile overlap
                8,                   # tile batch size
                "4x-UltraSharp",     # upscaler name
                scale,               # scale factor
            ]
        },
    },
}
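For completeness, this is how I send the payload; a minimal sketch, assuming the webui is running locally with --api on the default port, and treating "myimage" and "input.png" as placeholders:

import base64
import requests

API_URL = "http://127.0.0.1:7860"  # assumption: local webui launched with --api

# "myimage" above stands for the base64-encoded source image, e.g.:
with open("input.png", "rb") as f:  # placeholder input file
    payload["init_images"] = [base64.b64encode(f.read()).decode("utf-8")]

# POST to the standard img2img endpoint; the outputs come back as
# base64-encoded strings in response.json()["images"]
response = requests.post(f"{API_URL}/sdapi/v1/img2img", json=payload)
response.raise_for_status()
images = response.json()["images"]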
The upscaler pass returns immediately, and the upscaled output looks like a simple Lanczos or Nearest upscale... I tried different upscalers and they all return a plain stretched resize.
Did I miss something?
Thanks for your help.
I have encountered the same problem. Have you resolved it?
My email is [email protected], thanks.
Facing the same issue.
Have you tried ONLY using Tiled Diffusion via the API? Does that work for you? I am able to use Tiled Diffusion via the API using this call as my alwayson_scripts JSON (I am using Python):
"alwayson_scripts": {
"tiled diffusion": {
"args": [
True, # enable
"Mixture of Diffusers", # method, eg MultiDiffusion
True, # Overwrite image size
True, # Keep input image size
1024, # Image width
1024, # Image height
96, # Latent tile width
96, # Latent tile height
48, # Latent tile overlap
2, # Latent tile batch size
"4x-UltraMix_Balanced", # Upscaler
2, # Scale Factor
True, # Enable Noise Inversion
10, # Inversion steps
1, # Retouch
1, # Renoise strength
64, # Renoise kernel size
False, # Move ControlNet tensor to CPU (if applicable)
False, # Enable Control
False, # Draw full canvas background
False, # Causalize layers
False, 0, 0, 0, 0, "", "", "Background", 0, -1, # region 1
False, 0, 0, 0, 0, "", "", "Background", 0, -1, # region 2
False, 0, 0, 0, 0, "", "", "Background", 0, -1, # region 3
False, 0, 0, 0, 0, "", "", "Background", 0, -1, # region 4
False, 0, 0, 0, 0, "", "", "Background", 0, -1, # region 5
False, 0, 0, 0, 0, "", "", "Background", 0, -1, # region 6
False, 0, 0, 0, 0, "", "", "Background", 0, -1, # region 7
False, 0, 0, 0, 0, "", "", "Background", 0, -1, # region 8
]
}
}
It eats a ton of my VRAM though (it takes all 24 GB and locks up my PC). I must be doing something wrong, but it also does that for me in the UI.
I have not tried using Tiled VAE with this, though. I would test Tiled Diffusion alone first.
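One more thing worth checking: newer versions of the webui expose a GET /sdapi/v1/script-info endpoint that returns, for every installed script, the labels and default values of the arguments it expects. If the number or order of values in your "args" list does not line up with that, the upscaler setting may land in the wrong slot. A minimal sketch, assuming the same local webui with --api (the name match below is a guess at how the script registers itself):

import requests

API_URL = "http://127.0.0.1:7860"  # assumption: local webui launched with --api

# Fetch argument metadata for every installed script
info = requests.get(f"{API_URL}/sdapi/v1/script-info").json()

# Print the expected argument labels and defaults for Tiled Diffusion (img2img side)
for script in info:
    if script.get("name", "").lower() == "tiled diffusion" and script.get("is_img2img"):
        for i, arg in enumerate(script.get("args", [])):
            print(i, arg.get("label"), "default:", arg.get("value"))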
I have used Tiled Diffusion and ControlNet together and it works fine.