StableSR

How to run inference with batch size > 1?

Open wenyuqing opened this issue 2 years ago • 2 comments

Hi, I have a large number of images to process and limited time, so I wonder how to run inference with a batch size larger than 1. I noticed that the test pipeline sr_val_ddpm_text_T_vqganfin_oldcanvas.py only supports bs=1. I tried changing the batch size to 4, making the following modifications to read 4 images per batch:

# for n in trange(len(init_image_list), desc="Sampling"):
#     init_image = init_image_list[n]
for n in trange(0, len(init_image_list), 4, desc="Sampling"):
    init_image = init_image_list[n:n + 4]
    img_list_part = img_list[n:n + 4]
    init_image = torch.cat(init_image, dim=0)
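For reference, the chunking above can be written as a small standalone helper. This is a minimal sketch, not the StableSR code: `make_batches` is a hypothetical name, and it assumes every image tensor has shape `(1, C, H, W)` with the same spatial size, since `torch.cat` would fail otherwise. It also handles a final partial batch, which the hard-coded `n:n+4` slice already does implicitly.

```python
import torch

def make_batches(images, batch_size=4):
    """Yield batched tensors from a list of (1, C, H, W) image tensors.

    Assumes all images share the same spatial size (required by
    torch.cat). The final batch may be smaller than batch_size.
    """
    for n in range(0, len(images), batch_size):
        chunk = images[n:n + batch_size]
        yield torch.cat(chunk, dim=0)

# Illustrative usage with dummy tensors standing in for loaded images:
imgs = [torch.zeros(1, 3, 64, 64) for _ in range(6)]
shapes = [b.shape[0] for b in make_batches(imgs)]
print(shapes)  # [4, 2]
```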

and then I noticed that the output of samples, _ = model.sample_canvas(cond=semantic_c, ...) has shape (4, ***), but only the first of the 4 generated images looks normal; the rest look like this: image
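One common pitfall when adapting a bs=1 pipeline (a guess at the cause here, not confirmed for StableSR) is that conditioning tensors such as semantic_c are still computed for a single image, so the sampler silently broadcasts a batch-1 condition over 4 latents and only the first result is valid. A hedged sketch of expanding the condition to match the batch, with an assumed illustrative shape:

```python
import torch

# Assumed, illustrative shape for a per-image conditioning tensor;
# the real semantic_c shape in StableSR may differ.
semantic_c = torch.randn(1, 77, 1024)
batch_size = 4

# If the latents are (4, C, H, W) but the condition is still (1, ...),
# repeating it along the batch dimension keeps them aligned:
semantic_c_batched = semantic_c.repeat(batch_size, 1, 1)
print(semantic_c_batched.shape)  # torch.Size([4, 77, 1024])
```

Any other per-image tensors passed into sample_canvas (e.g. structural conditions) would need the same treatment.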

I'm looking forward to your quick reply!

wenyuqing avatar Nov 08 '23 16:11 wenyuqing

You may check here

IceClear avatar Nov 09 '23 04:11 IceClear

Thanks for your quick reply! How should I change this? Could you please provide more details if possible? I'm still a little confused.

wenyuqing avatar Nov 09 '23 04:11 wenyuqing