Start multiple types of models, with any number of instances of each
🚀 Feature
For example, I have models A, B, and C. I want to start one instance of model A, two instances of model B, and two instances of model C, and I have already defined the input and output classes for each model.
Is it possible to launch a service on a single port that serves one instance of model A, two of model B, and two of model C, with a round-robin mechanism for load balancing across the instances of each model?
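The routing described above can be sketched in plain Python. This is only an illustration of the round-robin idea, not LitServe's actual API: the `RoundRobinPool` class, the instance names, and the `route` function are all hypothetical.

```python
from itertools import cycle

class RoundRobinPool:
    """Cycles through a fixed list of model instances (hypothetical helper)."""

    def __init__(self, instances):
        self._cycle = cycle(instances)

    def next(self):
        return next(self._cycle)

# One instance of model A, two of B, two of C, as in the example above.
# Instance names are placeholders for real model workers.
pools = {
    "A": RoundRobinPool(["A-0"]),
    "B": RoundRobinPool(["B-0", "B-1"]),
    "C": RoundRobinPool(["C-0", "C-1"]),
}

def route(model_name):
    """Pick the next instance of the requested model, round-robin."""
    return pools[model_name].next()
```

A single server listening on one port could apply this routing per request, so consecutive requests for model B alternate between its two instances.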
Motivation
"I am making this request because, in my work, I often use a large model (such as an LLM) along with some smaller models (like OCR). Sometimes, we prefer to have multiple instances of the OCR models, while the LLM model may only need to have one or two instances."
Hey @ywh-my, thanks for creating the feature request! We will be implementing this. The first part, multiple LitAPI endpoints, will be covered in this PR, and a follow-up PR will add support for a different number of workers for each of them.
Thanks for your hard work!
Hi @aniketmaurya, quick question on this: are we planning to move workers_per_device so it is configured only at the LitAPI level?
I was thinking we could possibly support both levels, with the LitAPI config taking precedence if provided, and falling back to the server-level default otherwise. Would love to hear your thoughts on this!