Fury/VTK Streaming: webrtc/rtmp
As we know, VTK provides a set of tools to stream the rendered result. This streaming is expensive because it uses JPEG images consumed as base64 in the browser. In addition, sometimes we just want to stream the rendering result, with no interaction, to a large audience.
@filipinascimento and I have been discussing how to perform this streaming which I need for my PhD and for the Summer of Code project.
We already have some results achieving this through WebRTC and OpenGL buffers, but we need to define how to organize the code.
I think we can create an object inside fury/window.py which will be responsible for getting the buffer (FuryStreamClient). On the other hand, maybe the Fury Stream Server is more suitable for a different repo, because it has no dependency on fury.
My proposal is to have three ways to perform the communication between FuryStreamClient and FuryStreamServer.
1 Communication using a pipe
$ python viz_example.py | furystream --webrtc --youtube --youtube_key=.... --webrtc_port=8888
# example.py
from fury.window import FuryStreamClient
...
ms = 16  # ms refresh rate, 0 means a "RenderEvent" observer instead of a timer
stream = FuryStreamClient(
    showm, True, image_buffer=None, info_buffer=None, broker_url=None)
stream.init(ms)
....
showm.start()
Pros
- Simple
- good performance
- ???
Cons
- Non-agnostic: the viz script and furystream must be launched together
- Works only on the same machine
- ???
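The pipe option can be sketched with plain stdin/stdout: the visualization process writes frames to stdout and the stream server reads them from stdin, exactly as `python viz_example.py | furystream ...` would wire them up. The length-prefixed raw-RGB frame format below is an assumption for illustration only, not FURY's actual wire format:

```python
import struct
import subprocess
import sys

# What the furystream side of the pipe would do: read one
# length-prefixed RGB24 frame from stdin and report its size.
READER = r"""
import struct, sys
raw = sys.stdin.buffer
(w, h) = struct.unpack("<II", raw.read(8))
frame = raw.read(w * h * 3)  # RGB24 payload
print(f"got {w}x{h} frame, {len(frame)} bytes")
"""

def stream_frame(width, height, rgb_bytes):
    # Spawn the consumer and pipe one frame to it, mimicking
    # `python viz_example.py | furystream ...` with a single frame.
    proc = subprocess.run(
        [sys.executable, "-c", READER],
        input=struct.pack("<II", width, height) + rgb_bytes,
        capture_output=True,
    )
    return proc.stdout.decode().strip()

print(stream_frame(4, 2, bytes(4 * 2 * 3)))  # → got 4x2 frame, 24 bytes
```

In the real design the producer would loop at the refresh rate instead of sending a single frame, but the framing concern is the same: both ends must agree on the header layout.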
2 Communication using a broker
$ furystream --broker --webrtc --youtube --youtube_key=.... --webrtc_port=8888 --broker_port=5555
$ python viz_example.py
# example.py
from fury.window import FuryStreamClient
...
ms = 16  # ms refresh rate, 0 means a "RenderEvent" observer instead of a timer
stream = FuryStreamClient(
    showm, False, image_buffer=None, info_buffer=None, broker_url=broker_url)
stream.init(ms)
....
showm.start()
Pros
- Simple
- Can distribute the computation results into different machines.
- Agnostic (different from pipe)
- ???
Cons
- Depends on another protocol and library (ZMQ)
- Slower than the pipe and shared-memory approaches
- ???
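What makes this option agnostic is that the two processes only share a broker address, so they can start independently and even live on different machines. The sketch below uses stdlib TCP sockets as a stand-in for ZMQ just to show the relay pattern; `start_broker` and its one-shot relay are hypothetical simplifications, not the proposed furystream API:

```python
import socket
import threading

def start_broker():
    # Minimal one-shot relay standing in for a ZMQ broker: accept one
    # producer and one consumer, forward a single message between them.
    srv_in = socket.socket()
    srv_in.bind(("127.0.0.1", 0))
    srv_in.listen(1)
    srv_out = socket.socket()
    srv_out.bind(("127.0.0.1", 0))
    srv_out.listen(1)

    def relay():
        prod, _ = srv_in.accept()       # FuryStreamClient connects here
        cons, _ = srv_out.accept()      # the stream server connects here
        cons.sendall(prod.recv(65536))  # forward one frame
        for s in (prod, cons, srv_in, srv_out):
            s.close()

    threading.Thread(target=relay, daemon=True).start()
    return srv_in.getsockname()[1], srv_out.getsockname()[1]

def demo():
    port_in, port_out = start_broker()
    # Producer side: what FuryStreamClient would do with broker_url set.
    prod = socket.create_connection(("127.0.0.1", port_in))
    # Consumer side: what the fury stream server would do.
    cons = socket.create_connection(("127.0.0.1", port_out))
    prod.sendall(b"frame-0001")
    return cons.recv(1024)

print(demo())  # → b'frame-0001'
```

With ZMQ the broker could use a PUB/SUB or PUSH/PULL pair instead of raw sockets, which also gives reconnection and multi-subscriber fan-out for free.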
3 Communication using shared memory and forks
$ python viz_example.py
# example.py
from fury.window import FuryStreamClient
from fury_stream_server import fury_webrtc, fury_youtube
...
process_wrtc = multiprocessing.Process(target=fury_webrtc, args=(image_buffer, info_buffer))
process_wrtc.start()
...
ms = 16 # ms refresh rate, 0 means a "RenderEvent" observer instead of "TimerEvent"
stream = FuryStreamClient(
    showm, False, image_buffer=image_buffer, info_buffer=info_buffer, broker_url=None)
stream.init(ms)
....
showm.start()
Pros
- Faster
- Low memory consumption
- ??
Cons
- Works only in the same machine
- Shared memory has a fixed, pre-allocated size, so we need to handle window resize events carefully
- ??
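This option maps directly onto Python's multiprocessing.shared_memory (3.8+): the renderer allocates a named buffer, writes each frame into it, and the forked webrtc/youtube process attaches to the same buffer by name. The sketch below shows one round trip; the frame layout and the `reader`/`demo` helpers are assumptions for illustration:

```python
from multiprocessing import Process, Queue, shared_memory

FRAME_BYTES = 8 * 4 * 3  # tiny fixed-size RGB buffer (hence the resize issue)

def reader(name, q):
    # What the webrtc/youtube process would do: attach to the existing
    # shared buffer by name and read the current frame, zero-copy.
    shm = shared_memory.SharedMemory(name=name)
    q.put(bytes(shm.buf[:FRAME_BYTES]))
    shm.close()

def demo():
    # Renderer side: allocate the buffer and write one frame into it.
    shm = shared_memory.SharedMemory(create=True, size=FRAME_BYTES)
    try:
        shm.buf[:FRAME_BYTES] = bytes(range(FRAME_BYTES))
        q = Queue()
        p = Process(target=reader, args=(shm.name, q))
        p.start()
        frame = q.get()  # frame read back by the child process
        p.join()
    finally:
        shm.close()
        shm.unlink()
    return frame

if __name__ == "__main__":
    frame = demo()
    print(frame[:4])  # → b'\x00\x01\x02\x03'
```

Because the block size is fixed at creation, a window resize would need either reallocation (and re-attachment in the consumer) or an over-allocated buffer plus width/height fields in the info buffer, which is exactly the con listed above.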
Hi @devmessias,
Thank you for pointing that out. That's great!
On the other hand, maybe the Fury Stream Server is more suitable for a different repo, because it has no dependency on fury.
I think it should stay in the same repo.
Could you add the branch or a link to your code in this issue? It would be good if everyone could test this. I think we will discuss it during an open meeting.
If someone is interested in the discussion, you can also look at our Discord channel.
Hi @skoudoro.
So far I only have a lot of ugly, disconnected scripts, but I will work on PR #437, and soon I will send a new pull request with the WebRTC server using the shared-memory approach.
Closing this issue because #437 is already merged.