Serve models for parallel benchmarking
Problem
The number of parallel pipelines that Benchmark can run is limited because the models need to be copied into each process.
Idea
Serve models in a separate process (e.g. in a Docker container) and make Parallelize send batches to them as needed.
This would also make it possible to run pipelines on low-resource devices by relying on remotely deployed segmentation and embedding models.
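As a rough illustration of what this could look like, here is a minimal sketch assuming plain HTTP with numpy's `.npy` serialization as the transport; the names `serve_model` and `RemoteModel` are hypothetical and not part of diart's current API.

```python
# Minimal sketch, not diart's actual API: serve any batch -> output callable
# over HTTP and call it from the pipeline process as if it were local.
import io
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

import numpy as np


def serve_model(model, host="0.0.0.0", port=7007):
    """Expose `model` (a callable mapping a batch array to an output array)."""
    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the serialized batch sent by the client
            length = int(self.headers["Content-Length"])
            batch = np.load(io.BytesIO(self.rfile.read(length)))
            output = model(batch)
            # Send the model output back in the same .npy format
            buffer = io.BytesIO()
            np.save(buffer, output)
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            self.wfile.write(buffer.getvalue())

    HTTPServer((host, port), Handler).serve_forever()


class RemoteModel:
    """Client-side stand-in for a local model: forwards batches to the server."""

    def __init__(self, address: str):
        self.url = f"http://{address}"

    def __call__(self, batch: np.ndarray) -> np.ndarray:
        buffer = io.BytesIO()
        np.save(buffer, batch)
        request = Request(self.url, data=buffer.getvalue(), method="POST")
        with urlopen(request) as response:
            return np.load(io.BytesIO(response.read()))
```

With a wrapper like this, a pipeline could instantiate RemoteModel("192.168.0.20:7007") wherever it would normally load the segmentation or embedding model, so Parallelize would only need to copy the lightweight client into each worker process.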
Example
diart.benchmark /wav/dir --reference /rttm/dir --segmentation 192.168.0.20:7007 --num-workers 16
and even:
diart.stream microphone --segmentation 192.168.0.20:7007 --embedding 192.168.0.20:7008
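If something like this were adopted, the CLI would need to tell a host:port address apart from a regular model name. A hypothetical sketch of that dispatch (the host:port heuristic and the loader callables are assumptions, not diart's actual logic):

```python
# Hypothetical: decide whether --segmentation/--embedding points to a
# remote server or to a locally loadable model.
import re

ADDRESS = re.compile(r"^[\w.\-]+:\d+$")  # e.g. 192.168.0.20:7007


def resolve_model(spec: str, load_local, make_remote):
    """Return a remote client for host:port specs, otherwise load locally."""
    if ADDRESS.match(spec):
        return make_remote(spec)  # e.g. RemoteModel(spec) from the sketch above
    return load_local(spec)       # e.g. a pretrained model name or local path
```

That way diart.benchmark and diart.stream could build either a local or a remote model from the same flag without changing anything else in the pipeline.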