taming-transformers
Multi GPU Inference through nn.DataParallel
Do you (intend to) support multi-GPU inference through torch's nn.DataParallel class?
For example:
model = vqgan.VQModel(**config.model.params)
model = nn.DataParallel(model)
Thank you!
I've tried the same. It doesn't speed up inference, but you can run multiple model instances to serve users.
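For reference, wrapping a model in nn.DataParallel looks like the snippet below. This is a minimal sketch using a small stand-in module rather than vqgan.VQModel, so it runs anywhere; the wrapping pattern is the same. Note that DataParallel only splits each batch across GPUs within a single forward pass, which is why it gives little speedup for small-batch or step-by-step sampling.

```python
import torch
import torch.nn as nn

# Stand-in module (an assumption for illustration); in practice this
# would be vqgan.VQModel(**config.model.params).
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyModel()
# Each input batch is scattered across available GPUs, and the outputs
# are gathered back on the default device. With no GPUs present,
# DataParallel simply falls through to the wrapped module.
model = nn.DataParallel(model)
model.eval()

with torch.no_grad():
    out = model(torch.randn(4, 3, 16, 16))

print(out.shape)  # torch.Size([4, 8, 16, 16])
```

Because the batch is the only axis of parallelism, throughput improves only when batches are large enough to split; a single-sample request sees no benefit, which matches the observation above.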