Decoding video frames on TS side
📚 The doc issue
To maximize throughput, I would like the client to batch multiple frames into a single request before sending them to TS. However, I am struggling to decode them back into tensors or arrays on the TS side (i.e. bytearray to tensor). The only way I've managed so far is to write the frames to disk and then read them back with cv2 (on the TS side). Does anyone know of a way to decode the frames without writing to disk?
Suggest a potential alternative/fix
No response
@avickars Can you please share the code you have tried? I can try to repro this.
@avickars Have you looked at this example where a video is being passed? I believe you can modify it to pass a batch of images and decode it in the handler: https://github.com/pytorch/serve/tree/master/examples/MMF-activity-recognition
@agunapal Appreciate the response! That example is essentially my code: I copied it exactly, and it doesn't work (I've tried a variety of other avenues with no luck). The only difference between my code and the example is that I was replicating it inside a workflow and performing the decoding in the workflow pre-processor (to be clear, I was not able to decode it in the pre-processor). There shouldn't be anything preventing it from working in a workflow, right? Or am I missing something? I'll post the error I'm getting shortly.
Thanks so much!
To follow up, below is the torchserve output I am getting:
2022-07-27T22:58:17,630 [INFO ] W-9003-w1__pre_processing_1.0-stdout MODEL_LOG - video_tensor, audio_tensor, info = torchvision.io.read_video(video)
2022-07-27T22:58:17,630 [INFO ] W-9003-w1__pre_processing_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.8/site-packages/torchvision/io/video.py", line 267, in read_video
2022-07-27T22:58:17,630 [INFO ] W-9003-w1__pre_processing_1.0-stdout MODEL_LOG - if not os.path.exists(filename):
2022-07-27T22:58:17,631 [INFO ] W-9003-w1__pre_processing_1.0-stdout MODEL_LOG - File "/opt/conda/lib/python3.8/genericpath.py", line 19, in exists
2022-07-27T22:58:17,630 [ERROR] ForkJoinPool.commonPool-worker-3 org.pytorch.serve.ensemble.DagExecutor - org.pytorch.serve.http.InternalServerException: Failed to execute workflow Node after 0 attempts : Error executing pre_processing
2022-07-27T22:58:17,631 [INFO ] W-9003-w1__pre_processing_1.0-stdout MODEL_LOG - os.stat(path)
2022-07-27T22:58:17,631 [INFO ] W-9003-w1__pre_processing_1.0-stdout MODEL_LOG - TypeError: stat: path should be string, bytes, os.PathLike or integer, not _io.BytesIO
The error is of course coming from here in the example: https://github.com/pytorch/serve/blob/2d9c7ccc316f592374943a1963c1057bbe232c9e/examples/MMF-activity-recognition/handler.py#L117 — the cause is the io.BytesIO(data['data']) right above it, since torchvision.io.read_video expects a file path rather than a file-like object.
Really I'm just wondering if there is another way to read in the video?
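For what it's worth, the TypeError in the log can be reproduced with the standard library alone: os.path.exists (which read_video calls first) only accepts paths, so a BytesIO fails immediately. A minimal sketch of the failure plus a temp-file workaround, assuming spilling to a temporary file is acceptable (the read_video call itself is left commented out, since it needs a real video):

```python
import io
import os
import tempfile

payload = b"fake-video-bytes"  # stands in for data['data'] from the request

# This is exactly what fails inside torchvision.io.read_video:
try:
    os.path.exists(io.BytesIO(payload))
    raised = False
except TypeError:
    raised = True
print(raised)  # True

# Workaround: spill the bytes to a temp file and pass the *path* instead.
with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as f:
    f.write(payload)
    tmp_path = f.name
# video_tensor, audio_tensor, info = torchvision.io.read_video(tmp_path)
os.unlink(tmp_path)
```

This keeps read_video unchanged at the cost of one disk round-trip per request, so it sidesteps rather than answers the "without writing to disk" part of the question.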
@avickars I tried something like this and it works:
You can have one process reading frames from a device and putting them in a buffer.
Have another process read this buffer frame by frame, convert each frame to bytes, and make requests.
I am still figuring out how to pass a batch of images.
with open(<file>, 'rb') as data:
    response = requests.post('http://localhost:8080/predictions/resnet152', data=data)
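The two-process split described above can be sketched with the standard library alone. Here the cv2 capture and the requests.post call are replaced by stand-ins (a synthetic byte-frame generator and a list append), so only the buffer wiring is real:

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=32)  # bounded buffer between the two workers
SENTINEL = None                        # tells the sender that capture is done

def capture(n_frames):
    # Stand-in for a cv2.VideoCapture loop: each "frame" is just bytes here.
    for i in range(n_frames):
        frame_queue.put(b"frame-%d" % i)
    frame_queue.put(SENTINEL)

sent = []

def sender():
    # Stand-in for the request loop: drain the buffer frame by frame.
    while True:
        frame = frame_queue.get()
        if frame is SENTINEL:
            break
        sent.append(frame)  # here you would POST `frame` to TorchServe

t1 = threading.Thread(target=capture, args=(5,))
t2 = threading.Thread(target=sender)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(sent))  # 5
```

The bounded queue gives you backpressure for free: if the sender falls behind, the capture loop blocks instead of growing memory without limit.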
@avickars I tried the following and it works.
If you are using OpenCV to read the video, add the following code in your while loop.
Please let me know if this works for you.
data = cv2.imencode('.jpg', frame)[1].tobytes()
response = requests.post('http://localhost:8080/predictions/resnet152', data=data)
Please re-open if the above recommended solution didn't work