
bug: Concurrent requests with the streaming feature produce parallel calls to the runner

Open · bruno-hays opened this issue 10 months ago · 1 comment

Describe the bug

To enable streaming in BentoML, the Runnable method must return an AsyncGenerator. Calling such a method returns immediately, even though the computation that produces the outputs is still in progress. As a result, the Runnable method is always considered finished, so every incoming request to the service is dispatched to the runner right away, even while a previous generator is still producing output. This means there is no bound on the runner's memory footprint.
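To illustrate the mechanism (a minimal asyncio sketch, not BentoML-specific): calling an async generator function returns a generator object immediately, and none of its body runs until the first iteration:

    import asyncio
    from typing import AsyncGenerator

    async def generate() -> AsyncGenerator[str, None]:
        for i in range(3):
            await asyncio.sleep(1)  # simulate slow token generation
            yield f"token-{i}"

    async def main():
        gen = generate()  # returns immediately; no body code has run yet
        async for token in gen:  # work only happens while iterating
            print(token)

    asyncio.run(main())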

To reproduce

No response

Expected behavior

The service should wait for the first AsyncGenerator to complete before requesting a new one.

A simple fix for this issue is to hold a lock for the duration of the runnable method:

    import asyncio
    from typing import AsyncGenerator

    def __init__(self):
        # An asyncio.Lock (rather than threading.Lock) avoids blocking
        # the event loop while waiting for the previous generator
        self.predict_lock = asyncio.Lock()

    async def predict(self, input) -> AsyncGenerator[str, None]:
        async with self.predict_lock:
            # compute and yield whatever
            yield "..."
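A possible variant of this sketch: if strict one-at-a-time serialization is too restrictive, an asyncio.Semaphore could bound the number of concurrent generators instead (the limit of 4 below is illustrative):

    def __init__(self):
        # Hypothetical limit: allow at most 4 generators in flight
        self.predict_semaphore = asyncio.Semaphore(4)

    async def predict(self, input) -> AsyncGenerator[str, None]:
        async with self.predict_semaphore:
            # compute and yield whatever
            yield "..."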

I think this locking mechanism should either be implemented on the BentoML side, or its necessity should be made clear in the documentation.

Environment

bentoml==1.1.4

bruno-hays · Mar 29 '24 17:03

Runners can definitely run in parallel; in the vLLM case in particular, we batch requests to improve performance, so BentoML cannot make such an assumption. BTW, if you want to control the concurrency, you can specify max_concurrency via the @bentoml.service decorator.
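For example (a minimal sketch assuming the BentoML 1.2 service API; the service name, method body, and limit are illustrative):

    import bentoml
    from typing import AsyncGenerator

    # Cap in-flight requests per worker so only one generator
    # streams at a time (illustrative value)
    @bentoml.service(traffic={"max_concurrency": 1})
    class StreamService:
        @bentoml.api
        async def predict(self, prompt: str) -> AsyncGenerator[str, None]:
            for token in prompt.split():
                yield token + " "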

Of course, you can implement such a locking mechanism in your Bento. Hope that answers your question.

xianml · May 31 '24 13:05