guided-diffusion-keras
concatenate the conditional and unconditional inputs to speed inference
Hello,
I have a question about this code in diffuser.py:
why does it speed up inference?
Could you explain it to me?
nn_inputs = [np.vstack([x_t, x_t]),
             np.vstack([noise_in, noise_in]),
             np.vstack([label, label_empty_ohe])]
Hey! The speedup happens in the next line: x0_pred = self.denoiser.predict(nn_inputs, batch_size=self.batch_size). Here we only have to call .predict once on the concatenated batch, which is faster than calling .predict twice, once on the conditional inputs and once on the unconditional ones.
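A minimal sketch of the batching trick, using a toy NumPy function (denoiser_predict, a hypothetical stand-in for the Keras model's .predict) so it runs without Keras: stack the conditional and unconditional inputs into one doubled batch, run a single forward pass, then split the output in half.

```python
import numpy as np

# Hypothetical stand-in for the Keras denoiser: any function mapping a
# batch of inputs to a batch of predictions works for this sketch.
def denoiser_predict(x_t, noise_in, label):
    # toy "network": the prediction depends on the conditioning label
    return x_t * 0.9 + label.sum(axis=1, keepdims=True) * 0.01

batch = 4
x_t = np.random.rand(batch, 8)
noise_in = np.random.rand(batch, 1)
label = np.random.rand(batch, 16)    # conditioning (e.g. label embedding)
label_empty = np.zeros_like(label)   # "empty" label -> unconditional

# One forward pass on the doubled batch ...
stacked = denoiser_predict(np.vstack([x_t, x_t]),
                           np.vstack([noise_in, noise_in]),
                           np.vstack([label, label_empty]))
# ... then split back: first half conditional, second half unconditional.
x0_pred_label, x0_pred_no_label = stacked[:batch], stacked[batch:]

# Equivalent to (but, on a GPU, faster than) two separate calls:
assert np.allclose(x0_pred_label, denoiser_predict(x_t, noise_in, label))
assert np.allclose(x0_pred_no_label, denoiser_predict(x_t, noise_in, label_empty))
```

On a GPU this helps because one large batch keeps the device busy and pays the per-call overhead only once.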
Thank you! I got it!
And about this part of the code in diffuser.py:
I don't understand what it does.
What is the difference between x0_pred_label and x0_pred_no_label?
# classifier-free guidance:
x0_pred = self.class_guidance * x0_pred_label + (1 - self.class_guidance) * x0_pred_no_label

if self.perc_thresholding:
    # clip the prediction using dynamic thresholding a la Imagen:
    x0_pred = dynamic_thresholding(x0_pred, perc=self.perc_thresholding)
x0_pred_label is the prediction conditioned on the text embedding, and x0_pred_no_label is the unconditional prediction (where the text-embedding input is all zeros).
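The mixing line above is just a weighted combination of the two predictions. A small numeric sketch (cfg_mix is a hypothetical helper name, not from the repo) shows what the guidance scale does: at 1 you recover the purely conditional prediction, and above 1 you extrapolate away from the unconditional one, strengthening the conditioning.

```python
import numpy as np

# Classifier-free guidance mix: with guidance scale g,
#   x0_pred = g * cond + (1 - g) * uncond
def cfg_mix(x0_pred_label, x0_pred_no_label, class_guidance):
    return (class_guidance * x0_pred_label
            + (1 - class_guidance) * x0_pred_no_label)

cond = np.array([1.0, 2.0])    # prediction with the label
uncond = np.array([0.0, 0.0])  # prediction with the empty label

print(cfg_mix(cond, uncond, 1.0))  # -> [1. 2.]  (pure conditional)
print(cfg_mix(cond, uncond, 2.0))  # -> [2. 4.]  (pushed toward the label)
```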
Got it! Thank you!
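As an aside, the dynamic_thresholding call in the snippet above can be sketched roughly as follows; this is an assumption based on the Imagen paper's description (per-sample percentile clipping and rescaling), not a copy of this repo's implementation.

```python
import numpy as np

# Sketch of dynamic thresholding a la Imagen: clip each sample to its own
# high percentile s of |x| and rescale by s when s exceeds 1, which keeps
# the predicted x0 roughly inside [-1, 1] at high guidance scales.
def dynamic_thresholding(x0_pred, perc=99.5):
    # per-sample percentile of the absolute values (reduce all but axis 0)
    s = np.percentile(np.abs(x0_pred), perc,
                      axis=tuple(range(1, x0_pred.ndim)), keepdims=True)
    s = np.maximum(s, 1.0)  # only rescale when the percentile exceeds 1
    return np.clip(x0_pred, -s, s) / s

x = np.array([[0.5, -0.5, 3.0, -3.0]])
out = dynamic_thresholding(x, perc=100)
print(out)  # every value now lies in [-1, 1]
```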