da-source
> #### Motivation
>
> * Reduce latency when multiple requests are required
> * Stream output from the predictor as it's generated

When will this feature become available?
> @da-source we haven't scheduled this one yet; we usually plan about two weeks at a time.
>
> Would it be possible to change your API implementation so that...
> @mutal we haven't come up with a timeline for it yet. We'll keep this ticket updated as we go along. Is this urgent for you?
>
> And to reiterate...