batch processing/parallel processing
Hi there, does Petals currently support batch processing/parallel processing? For example, to increase resource usage and system throughput, we would like servers to process multiple prompts in parallel at the same time, i.e., batch processing. Is this possible? Thanks a lot.
Hi! Both forward/backward passes and autoregressive inference can run with any batch size, provided you have enough memory for it.
In our training examples, we use batched training, e.g. this one https://github.com/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb uses a batch size of 32.
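For context, batching just means stacking several prompts into one input tensor; since prompts usually differ in length, they are first padded to a common length so the batch is rectangular. Below is a minimal, library-agnostic sketch of that step (the `pad_batch` helper and pad id are illustrative assumptions, not part of the Petals API):

```python
PAD_ID = 0  # hypothetical pad token id; real models define their own

def pad_batch(prompts, pad_id=PAD_ID):
    """Left-pad variable-length token-id lists into a rectangular batch.

    Left padding keeps the most recent tokens aligned at the right edge,
    which is the usual convention for autoregressive generation.
    """
    max_len = max(len(p) for p in prompts)
    return [[pad_id] * (max_len - len(p)) + p for p in prompts]

# Three prompts of different lengths become one (3, 3) batch:
batch = pad_batch([[5, 6, 7], [8, 9], [10]])
# → [[5, 6, 7], [0, 8, 9], [0, 0, 10]]
```

Once padded like this, the whole batch can go through a single forward pass, which is how the throughput gain from batching is realized.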