VoiceCraft
Batch Inference
Hi, is it possible to do batched inference the way LLMs do? For example, provide 10 transcripts and batch the requests to increase total throughput?
TTS inference already runs in batch mode, but right now it runs the same transcript multiple times and selects the shortest output. The code needs some tweaks to support batching different transcripts.
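The main tweak the answer alludes to is that different transcripts produce token sequences of different lengths, so they must be padded to a common length (with a mask) before they can share one forward pass. Below is a minimal, hypothetical sketch of that padding step — `encode`, `PAD_ID`, and `make_batch` are illustrative names, not VoiceCraft's actual API:

```python
PAD_ID = 0  # hypothetical padding token id


def encode(transcript):
    # Stand-in tokenizer for illustration: one integer id per character.
    return [ord(c) for c in transcript]


def make_batch(transcripts):
    """Pad variable-length token sequences to a common length and
    build an attention mask (1 = real token, 0 = padding), so the
    batch can be fed through the model in a single forward pass."""
    seqs = [encode(t) for t in transcripts]
    max_len = max(len(s) for s in seqs)
    tokens = [s + [PAD_ID] * (max_len - len(s)) for s in seqs]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in seqs]
    return tokens, mask


tokens, mask = make_batch(["hello", "hi"])
# Both rows are now length 5; the second row is padded and masked.
```

In a real implementation the padded batch and mask would become tensors passed to the model, and the mask would keep attention (and the loss, if training) from touching the pad positions.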
Isn't inference_tts_scale.py the way to do batch inference?