How to run inference with batch size > 1?
Hi, I have a large number of images to process and limited time, so I would like to know how to run inference with a batch size larger than 1. I noticed that the test pipeline sr_val_ddpm_text_T_vqganfin_oldcanvas.py only supports bs=1. I tried changing the batch size to 4, making the following modifications to read 4 images as a batch:
# for n in trange(len(init_image_list), desc="Sampling"):
#     init_image = init_image_list[n]
for n in trange(0, len(init_image_list), 4, desc="Sampling"):
    # Slice out up to 4 images (the final slice may be shorter) and
    # concatenate them along the batch dimension.
    init_image = init_image_list[n:n + 4]
    img_list_part = img_list[n:n + 4]
    init_image = torch.cat(init_image, dim=0)
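For reference, here is a self-contained version of this loop (dummy tensors stand in for the real images; the (1, 3, 512, 512) shape is an assumption for illustration, and note that torch.cat requires every image in a slice to share the same spatial size):

import torch
from tqdm import trange

batch_size = 4
# Dummy stand-ins for the script's preloaded images and filenames.
init_image_list = [torch.randn(1, 3, 512, 512) for _ in range(10)]
img_list = [f"img_{i:03d}.png" for i in range(10)]

for n in trange(0, len(init_image_list), batch_size, desc="Sampling"):
    # The last slice may hold fewer than batch_size images; torch.cat
    # simply produces a smaller batch in that case.
    init_image = torch.cat(init_image_list[n:n + batch_size], dim=0)
    img_list_part = img_list[n:n + batch_size]
    assert init_image.size(0) == len(img_list_part)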
I then noticed that the output of samples, _ = model.sample_canvas(cond=semantic_c, ...) has shape (4, ***).
However, only the first of the 4 generated images is normal; the rest look like this:
I'm looking forward to your quick reply!
You may check here.
Thanks for your quick reply! So how should I change this? Could you please provide more details if possible? I'm still a little confused.
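Concretely, is the issue that the conditioning tensors passed to sample_canvas still have batch dimension 1, so only the first image in the batch is conditioned correctly? Here is a self-contained sketch of the kind of change I imagine (a guess on my part, not a confirmed fix; the shape of semantic_c is assumed purely for illustration):

import torch

batch = 4  # number of images concatenated into one batch above

# Stand-in for the script's semantic/text conditioning; the (1, 77, 1024)
# shape is an assumption for this example only.
semantic_c = torch.randn(1, 77, 1024)

# If the conditioning keeps batch dimension 1 while the latents have
# batch dimension `batch`, every sample after the first would be
# mis-conditioned. Tiling it along dim 0 would align the shapes:
if semantic_c.size(0) == 1 and batch > 1:
    semantic_c = semantic_c.repeat(batch, 1, 1)

assert semantic_c.size(0) == batch
# ...then call samples, _ = model.sample_canvas(cond=semantic_c, ...) as before.
# Presumably any other tensor given to sample_canvas with a leading batch
# dimension of 1 would need the same expansion.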