asyncio example
📚 Documentation
When doing batch inference, the default approach for many Python programmers is to make calls with the `requests` library. Unfortunately, `requests` is synchronous, so each call blocks until it completes and the server only ever sees one request at a time; users then get confused by results that always come back with batch_size=1. To address this, we would like to add an asyncio example to our getting started guide.
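For reference, here is a minimal sketch of what such an example might look like, using `aiohttp` to fire requests concurrently so the server has a chance to group them into a batch. The endpoint URL, model name, and payloads below are assumptions for illustration; adjust them to match your deployed model.

```python
# Sketch: concurrent inference requests with asyncio + aiohttp.
import asyncio

import aiohttp

# Hypothetical TorchServe prediction endpoint for a model named "my_model".
URL = "http://127.0.0.1:8080/predictions/my_model"


async def infer(session: aiohttp.ClientSession, payload: bytes) -> str:
    # Each coroutine sends one request; because they all run concurrently,
    # the server can collect several of them into a single batch.
    async with session.post(URL, data=payload) as resp:
        return await resp.text()


async def main() -> None:
    # Example inputs; in practice these might be image bytes or JSON.
    payloads = [b"example input %d" % i for i in range(8)]
    async with aiohttp.ClientSession() as session:
        # Fire all requests at once instead of one after another.
        results = await asyncio.gather(*(infer(session, p) for p in payloads))
    for result in results:
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

With synchronous `requests`, the same loop would send one request, wait for it to finish, then send the next, so the server's batching window would rarely contain more than one item.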
@msaroufim could I have a look at this?
Yup go for it!
Hello, is anyone working on this?
@msaroufim can you please further clarify the purpose of the guide?
@LuigiCerone We have an example here https://github.com/pytorch/serve/blob/master/examples/cloud_storage_stream_inference/stream_inference.py#L25
If you don't think it's enough, lmk and we can expand it more.