Piping frames to ffmpeg
Summary
This pull request updates a broken link and rewrites some code to make it compatible with piping frames.
Changes
- Updated the broken link in the README file.
- Rewrote some code to make it compatible with piping frames. This means that images can now be piped directly to ffmpeg without being converted to PNG files and written to disk (see the sketch below the list). This requires a lot of changes to RenderProcessor, and there may be a different way to achieve this compatibility; I am new to Dart, so I am not sure what the best approach is.
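Roughly, the idea looks like this. This is only a minimal sketch, assuming an ffmpeg binary reachable through Process.start; the resolution, frame rate, and output path are placeholder values, and the plugin's actual ffmpeg integration may differ:

```dart
import 'dart:convert';
import 'dart:io';
import 'dart:typed_data';

/// Illustrative only: stream raw RGBA frames into ffmpeg's stdin.
/// Resolution, frame rate and output path are placeholder values.
Future<void> pipeFramesToFfmpeg(Stream<Uint8List> rawFrames) async {
  final process = await Process.start('ffmpeg', [
    '-y',
    '-f', 'rawvideo', // frames arrive as raw pixel data on stdin
    '-pix_fmt', 'rgba',
    '-s', '1080x1920',
    '-r', '30',
    '-i', '-', // read input from stdin instead of from files on disk
    '-pix_fmt', 'yuv420p',
    'output.mp4',
  ]);

  // Keep draining ffmpeg's log output so the process never blocks on it.
  process.stderr.transform(utf8.decoder).listen(stderr.write);

  await for (final frame in rawFrames) {
    process.stdin.add(frame); // no intermediate PNG encoding or disk write
  }
  await process.stdin.flush();
  await process.stdin.close();
  print('ffmpeg exited with code ${await process.exitCode}');
}
```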
Testing
I have tested these changes locally and they appear to work as expected. Please let me know if you have any questions or concerns.
Thanks,
Thanh Pham
This is just a prototype, but I hope it helps you save some time! Feel free to change it!
It looks good to me now. There are some optimizations that could still be done with MotionFormat, such as allowing users to set preset and tune options to balance video size against render speed. However, I will handle those changes in another pull request.
(I think) I have improved some things:
- Removed the storage of each ui.Image, as it is no longer needed: https://github.com/polarby/render/blob/29ec060bc4b40a9d602a368fcdbcaafcb28d7a8e/lib/src/capturer.dart#L29
- Removed `simultaneousCaptureHandlers`, which now seems obsolete, as piping can run simultaneously
I couldn't help but notice that the piping process sometimes (almost randomly) takes significantly longer than storing each frame, the old approach. Although piping should be the better way, I cannot see a major improvement...?! For web, on the other hand, this feature might be essential?! I am not sure it is good to merge this for the whole plugin, because although the process is now cleaner, it takes longer most of the time. Let's discuss it! How can we improve this? You said you noticed this too, right?
Yes, there was no major improvement because it's a trade-off: the old approach captures and handles images in parallel and then batch-processes them at the final stage, whereas the new approach pipes images through a queue, converting them and processing them in ffmpeg simultaneously. For short durations there may not be a significant impact, but for recording over a long period it can help save memory, storage, and time as well.
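For concreteness, here is a rough Dart sketch of what I mean by piping images through a queue; the class and method names are made up for this discussion, not the plugin's actual API:

```dart
import 'dart:async';
import 'dart:io';
import 'dart:typed_data';

/// Rough sketch of "piping frames through a queue": every captured
/// frame is enqueued immediately, and a single consumer drains the
/// queue into ffmpeg's stdin in capture order. Class and method
/// names are placeholders for discussion, not the plugin's API.
class FramePipe {
  FramePipe(this._ffmpegStdin) {
    // Forward queued frames to ffmpeg as soon as they arrive.
    _drain = _ffmpegStdin.addStream(_queue.stream);
  }

  final IOSink _ffmpegStdin;
  final _queue = StreamController<Uint8List>();
  late final Future<void> _drain;

  /// Called once per captured frame; returns immediately, so capturing
  /// is never blocked by encoding.
  void enqueue(Uint8List rawFrame) => _queue.add(rawFrame);

  /// Signal end of input and wait until every queued frame was written.
  Future<void> finish() async {
    await _queue.close();
    await _drain;
    await _ffmpegStdin.flush();
    await _ffmpegStdin.close();
  }
}
```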
Regarding the web, I have conducted some research but have not yet found any way to implement piping.
Let's keep this in mind though! Although the web (e.g. using ffmpeg_wasm) might not require piping, long-period rendering might. If we could separate normal capturing (say, under 5-10 min) from longer-period capturing and recording, this reasonable and working solution could be merged; a rough sketch of that split is below.
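To make the suggestion concrete, something like the following split is what I have in mind; the 10 minute threshold and all names are placeholders, not existing plugin classes:

```dart
/// Placeholder sketch of the proposed split: keep the existing
/// frame-file approach for short captures and only use piping for
/// long recordings. Threshold and names are up for discussion.
enum FrameDelivery { storedFrames, pipedFrames }

const longCaptureThreshold = Duration(minutes: 10);

FrameDelivery selectDelivery(Duration expectedDuration) =>
    expectedDuration >= longCaptureThreshold
        ? FrameDelivery.pipedFrames // stream frames into ffmpeg as captured
        : FrameDelivery.storedFrames; // write PNGs and batch-process them
```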
@tienthanh1993 I'm thinking about giving your piping contribution a new chance at life. Check out #6
@polarby I wanted to check whether piping resolves some RAM issues I have on older devices such as the iPhone 8, but I cannot get this PR to run. For some reason, closing the IOSink (https://github.com/polarby/render/pull/15/files#diff-e4e775396876f43bd88ff1f2de5bfced24f4d39a9b582f880231095ee5f68a25R317) results in very strange behaviour: nothing after that referenced line is executed, and it doesn't throw an error or anything. Can you check if this works for you?
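Not a confirmed diagnosis, just a guess: if the sink wraps the stdin of a Process started for ffmpeg, close() can appear to hang when ffmpeg blocks on its own full stdout/stderr pipes and stops reading stdin. A minimal sketch of a close sequence that avoids that, with illustrative names:

```dart
import 'dart:io';

/// Guesswork, not a diagnosis: drain ffmpeg's output pipes and flush
/// before closing stdin, so the process never blocks on a full pipe.
/// `process` stands in for whatever RenderProcessor starts for ffmpeg.
Future<void> closeFfmpegInput(Process process) async {
  // Keep ffmpeg's output pipes drained so it cannot block on them.
  final stdoutDone = process.stdout.drain<void>();
  final stderrDone = process.stderr.drain<void>();

  await process.stdin.flush(); // write out any buffered frame data first
  await process.stdin.close(); // then signal end of input
  await process.exitCode;      // let ffmpeg finish writing the file
  await stdoutDone;
  await stderrDone;
}
```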