camera-samples
Difference between `MediaRecorder` and `MediaCodec` (`useMediaRecorder` option in `EncoderWrapper`)
Hey!
Sorry for asking this here, not sure if this is the right place - but what exactly does the `useMediaRecorder` option in `EncoderWrapper` do?
What is the difference between the `MediaRecorder` and the `MediaCodec` implementation? I see some differences in handling HDR formats - does one support HDR better?
Is one more efficient than the other?
Or is one simply not available with certain API levels?
Thanks!
`MediaRecorder` is a simpler interface for encoding videos. It handles the boilerplate threading for you in exchange for less flexibility. In most cases it's the interface you'll want to use.
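For reference, a minimal sketch of that path, assuming a Surface video source; the resolution, bitrate and frame rate are placeholders, and the sample's own `EncoderWrapper` does more than this:

```kotlin
import android.media.MediaRecorder

// Minimal MediaRecorder setup that hands back a Surface to draw into.
// The call order matters: sources, then output format, then encoders.
fun createRecorder(outputPath: String): MediaRecorder {
    val recorder = MediaRecorder()
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC)
    recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE)
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    recorder.setOutputFile(outputPath)
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
    recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264)
    recorder.setVideoSize(1920, 1080)            // placeholder values
    recorder.setVideoEncodingBitRate(10_000_000) // placeholder values
    recorder.setVideoFrameRate(30)               // placeholder values
    recorder.prepare()
    // After prepare(), recorder.surface is the encoder input: point a camera
    // output (or a GL pipeline) at it, then call start() / stop().
    return recorder
}
```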
`MediaCodec` gives you more control over when frames get encoded - see `EncoderWrapper.EncoderThread`. The code there is not particularly interesting. You'd use this if you have unusual real-time constraints and want direct control over how to drop frames, reduce the bitrate, etc.
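For comparison, a rough sketch of the shape of that drain loop (not the sample's actual `EncoderThread`), assuming an encoder configured with an input Surface and muxed with `MediaMuxer`:

```kotlin
import android.media.MediaCodec
import android.media.MediaMuxer

// You own the loop, so you decide what to do under pressure:
// skip writes, drop frames, or adjust encoder parameters at runtime.
fun drain(codec: MediaCodec, muxer: MediaMuxer) {
    val info = MediaCodec.BufferInfo()
    var track = -1
    while (true) {
        when (val index = codec.dequeueOutputBuffer(info, 10_000L /* µs */)) {
            MediaCodec.INFO_OUTPUT_FORMAT_CHANGED -> {
                track = muxer.addTrack(codec.outputFormat)
                muxer.start()
            }
            MediaCodec.INFO_TRY_AGAIN_LATER -> {
                // no output ready; a real-time pipeline could decide to drop here
            }
            else -> if (index >= 0) {
                val buffer = codec.getOutputBuffer(index)!!
                if (info.size > 0 && track >= 0) {
                    muxer.writeSampleData(track, buffer, info)
                }
                codec.releaseOutputBuffer(index, false)
                if (info.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM != 0) return
            }
        }
    }
}

// Runtime control example (API 19+): lower the video bitrate on the fly:
// codec.setParameters(Bundle().apply {
//     putInt(MediaCodec.PARAMETER_KEY_VIDEO_BITRATE, 2_000_000)
// })
```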
Neither is intrinsically more efficient - that depends on the use case. `MediaCodec` was added in API level 16, whereas `MediaRecorder` has been around since API level 1, but API level 16 is old at this point anyway.
Right now only the `MediaRecorder` path handles audio.
Gotcha, thanks for the insights.
I'm using OpenGL (+ Skia) to draw onto my camera frames in real time to apply filters. I can configure this pipeline to also output to the MediaRecorder Surface, but I read in some Stack Overflow posts that MediaRecorder is often a bit temperamental with this, and a more stable approach might be to use MediaCodec. Wdyt?
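For what it's worth, the GL side looks the same either way - a sketch, assuming the EGL display/config are the ones already set up for the preview (with `EGL_RECORDABLE_ANDROID`):

```kotlin
import android.opengl.EGL14
import android.opengl.EGLConfig
import android.opengl.EGLDisplay
import android.opengl.EGLSurface
import android.view.Surface

// Wrap the encoder's input Surface in an EGL window surface. `surface` can
// come from MediaRecorder.getSurface() or MediaCodec.createInputSurface().
fun createEncoderEglSurface(
    display: EGLDisplay,
    config: EGLConfig,
    surface: Surface
): EGLSurface {
    val attribs = intArrayOf(EGL14.EGL_NONE)
    return EGL14.eglCreateWindowSurface(display, config, surface, attribs, 0)
}

// Per frame, draw the filtered texture once to the preview surface and once
// here, setting a timestamp before publishing so A/V stays in sync:
//   EGLExt.eglPresentationTimeANDROID(display, encoderSurface, frameTimeNanos)
//   EGL14.eglSwapBuffers(display, encoderSurface)
```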
Also, I need to be able to flip the camera while recording, so at some point frames might come in at a different size than what the encoder was originally configured for - but I think I'll just force-resize them in my OpenGL pipeline to not confuse it... lol.
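(A sketch of that force-resize idea - `drawFrame` is a hypothetical stand-in for the Skia/GL draw call:)

```kotlin
import android.opengl.GLES20

// The encoder surface keeps its configured size; whatever resolution the
// flipped camera delivers, it just gets scaled into the fixed viewport.
// drawFrame() is a hypothetical helper standing in for the filter pipeline.
fun renderToEncoder(cameraTexId: Int, encoderWidth: Int, encoderHeight: Int) {
    GLES20.glViewport(0, 0, encoderWidth, encoderHeight)
    drawFrame(cameraTexId) // samples the camera texture into a full-viewport quad
}
```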
Would CameraX meet your needs? Per the CameraX team, you should be able to create a media3 effect for your filter, and CameraX will handle the preview / encoding split. Then you won't have to care about which one to use - CameraX will choose the right configuration for you.
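Roughly like this - a hedged sketch assuming the (alpha) camera-media3-effect artifact and its `Media3Effect` class; the exact constructor may differ, so check the current CameraX release notes:

```kotlin
import android.content.Context
import android.util.Log
import androidx.camera.core.CameraEffect
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.core.UseCaseGroup
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.media3.effect.Media3Effect // assumed artifact, see above
import androidx.camera.video.Recorder
import androidx.camera.video.VideoCapture
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner
import androidx.media3.effect.RgbFilter

fun bindWithFilter(
    context: Context,
    cameraProvider: ProcessCameraProvider,
    lifecycleOwner: LifecycleOwner,
    preview: Preview,
    videoCapture: VideoCapture<Recorder>
) {
    // Apply the same media3 effect to both the preview and the recording.
    val media3Effect = Media3Effect(
        context,
        CameraEffect.PREVIEW or CameraEffect.VIDEO_CAPTURE,
        ContextCompat.getMainExecutor(context)
    ) { error -> Log.e("Filter", "effect failed", error) }

    // Grayscale is just a stand-in for a real filter here.
    media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))

    val group = UseCaseGroup.Builder()
        .addUseCase(preview)
        .addUseCase(videoCapture)
        .addEffect(media3Effect)
        .build()
    cameraProvider.bindToLifecycle(
        lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, group)
}
```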
Unfortunately not - I've been using CameraX, but it was too limiting for my use cases.
Stuff like flipping cameras while recording, fully customizing the pipeline, etc. made it impossible for me to use CameraX.
Also, I want to draw using OpenGL + Skia, as this targets a cross-platform environment with iOS, and on iOS the Skia drawing already works perfectly.
With Camera2 on Android I got it to draw onto a SurfaceView, but I haven't quite figured out how to draw to a MediaRecorder - maybe I need to use MediaCodec for this and draw to its Surface with the OpenGL pipeline.
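If you go that route, getting the Surface out of `MediaCodec` looks roughly like this (format values are placeholders; `createInputSurface()` has to be called between `configure()` and `start()`):

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat
import android.view.Surface

// Sketch: create an H.264 encoder that takes its input from a Surface,
// which the OpenGL pipeline can then render into (see the EGL snippet above).
fun createEncoderSurface(width: Int, height: Int): Pair<MediaCodec, Surface> {
    val format = MediaFormat.createVideoFormat(
        MediaFormat.MIMETYPE_VIDEO_AVC, width, height
    ).apply {
        setInteger(
            MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface
        )
        setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000) // placeholder
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)       // placeholder
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    val codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
    codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    val surface = codec.createInputSurface()
    codec.start()
    return codec to surface
}
```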
Here's my codebase: https://github.com/mrousavy/react-native-vision-camera/tree/v3 (the /android/ folder)
- These are essentially my outputs
- Here's how I pass them from the `ImageReader` to the `MediaRecorder` (orientation unfortunately doesn't work here)