open_clip
Batched CoCa Generation
Modified CoCa generation to allow more straightforward batching.

From:

```python
with torch.no_grad(), torch.cuda.amp.autocast():
    generated = model.generate(imgs)
```
To:

```python
with torch.no_grad(), torch.cuda.amp.autocast():
    generated = model.generate(imgs, device=device, batch_size=20)
```
or:

```python
with torch.no_grad(), torch.cuda.amp.autocast():
    generated = model.generate(imgs, device=device)
```
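For context, a minimal end-to-end sketch of batched caption generation using the new arguments might look like the following. The checkpoint name, pretrained tag, and image paths are illustrative assumptions, not part of this change.

```python
import torch
from PIL import Image
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed CoCa checkpoint name and pretrained tag; substitute your own.
model, _, transform = open_clip.create_model_and_transforms(
    "coca_ViT-B-32",
    pretrained="mscoco_finetuned_laion2B-s13B-b90k",
)
model = model.to(device).eval()

# Hypothetical image paths; stack the preprocessed images into one batch tensor.
paths = ["cat.jpg", "dog.jpg"]
imgs = torch.stack([transform(Image.open(p).convert("RGB")) for p in paths])

with torch.no_grad(), torch.cuda.amp.autocast():
    generated = model.generate(imgs, device=device, batch_size=20)

# Decode generated token ids into caption strings.
captions = [
    open_clip.decode(g).split("<end_of_text>")[0].replace("<start_of_text>", "")
    for g in generated
]
print(captions)
```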
- `device`: Required argument. The previous implementation inferred the device from the image input, which raised an error when the model and images were on different devices; with this update, users can set generation to run on the same device as the model. Previously there was also no guarantee that images and text were on the same device when text was passed as an argument.
- `batch_size`: Optional argument that iterates over the input texts and images in chunks of the given size. If `batch_size=None`, the full input is used as a single batch (see the sketch after this list).
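To illustrate what `batch_size` does, the behavior is roughly equivalent to splitting the input yourself and generating per chunk. This is a simplified sketch, not the library's exact implementation.

```python
import torch

def generate_in_chunks(model, imgs, device, batch_size=None):
    """Rough equivalent of the batch_size argument: process the input
    images in chunks of batch_size and collect one output per image."""
    if batch_size is None:
        batch_size = imgs.shape[0]  # use the whole input as a single batch
    outputs = []
    for start in range(0, imgs.shape[0], batch_size):
        chunk = imgs[start:start + batch_size]
        with torch.no_grad(), torch.cuda.amp.autocast():
            out = model.generate(chunk, device=device)
        outputs.extend(out)  # one token tensor per image
    return outputs
```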