drishti
support YUV12 to RGB conversion in OpenGL
Hello, I have managed to get the facefilter app running on Ubuntu; however, it is emitting "YUV not supported" messages.
Is this limitation specific to the facefilter Qt app, or does it apply to drishti in general?
I know drishti is a lib for mobile, but I am developing a cross-platform Qt app, for both mobile and desktop.
Is it possible to somehow plug a YUV->RGB conversion into the GL pipeline?
Searching for "opengl yuv to rgb" turns up some results, e.g. http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c, but I have zero experience in OpenGL and can't implement this myself.
Thanks
p.s. or YV12<->NV12 GL conversion (camera supports YV12)
Hmmm. I haven't tried facefilter on Ubuntu, but it would be nice to support that. I have run it on OS X platforms (primarily for debugging). @ruslo has looked at this most recently on Ubuntu for Qt 5.9 (https://github.com/elucideye/drishti/issues/511). Any help getting this running is appreciated. I think we should be able to get it working (possibly with Qt 5.5); I'll have to review my notes and can take a look this weekend. BTW, there is a lighter weight drishti-hci app which exercises the same pipeline without Qt. It uses a bare bones aglet lib for portable OpenGL context creation (with a glfw based preview on Ubuntu), but the video interface for Ubuntu is limited to lists of still images.
I know drishti is a lib for mobile, but I am developing a cross-platform Qt app, for both mobile and desktop.
The libraries (including minimal GLSL processing) are geared towards mobile platforms, but they should run on desktop systems as well (primarily for development and testing). We build and run all the libs in CI tests on Ubuntu (Trusty), OS X and Windows. See .travis.yml and .appveyor.yml.
however it is emitting "YUV not supported" messages.
If you post a log of the errors with additional context, I can look at it more this weekend. How far does it get before reporting the error? Is the camera instantiated OK?
Is it possible to somehow plug-in YUV->RGB conversion into GL pipeline?
Yes. Drishti currently uses the hunter-packages/ogles_gpgpu library (an extended fork of the original project by Markus Konrad) for the cross platform GPGPU pipeline. I've added the NV21 conversion shaders (via GPUImage) here https://github.com/hunter-packages/ogles_gpgpu/blob/hunter/ogles_gpgpu/common/proc/yuv2rgb.h for this task.
See the unit test for an example: https://github.com/hunter-packages/ogles_gpgpu/blob/75c13959d983442c4b049080b62bdffdcf01128c/ogles_gpgpu/ut/test-ogles_gpgpu.cpp#L441-L533
These are currently used in facefilter for the NV21 iOS video feed, so it should be fairly easy to reuse them if the video feed is working properly.
Unfortunately, the Qt facefilter stuff is excluded from CI builds due to the excessive build times for the optional Qt package, so facefilter + ubuntu is probably the least supported corner of the project. Sorry about that. I'm interested in getting it working reliably, however, and potentially adding a lighter weight SFML demo application to cover both mobile and desktop platforms, which should be more CI friendly.
If you post a log of the errors with additional context, I can look at it more this weekend. How far does it get before reporting the error? Is the camera instantiated OK?
Here is the log:
...
[20:58:45.227 | thread:14661 | facefilter | info]: settings: 160 x 120 : 4
[20:58:45.228 | thread:14661 | facefilter | info]: device: /dev/video1
[20:58:45.228 | thread:14661 | facefilter | info]: description: UVC Camera (046d:081d)
[20:58:45.228 | thread:14661 | facefilter | info]: resolution: 1280 960
[20:58:45.228 | thread:14661 | drishti | info]: FrameHandlerManager #################################################################
Unable to set the parameter value: the parameter is not supported.
Unable to set the parameter value: the parameter is not supported.
GL_VENDOR: NVIDIA Corporation
[20:58:46.527 | thread:14686 | drishti | info]: DJH: settings ################### false, 0,01
[20:58:46.527 | thread:14686 | drishti | info]: FaceFinder::init2() #################################################################
[20:58:46.527 | thread:14686 | drishti | info]: {
faceDetector drishti_face_gray_64x64.cpb
faceRegressor drishti_full_face_model.cpb
eyeRegressor drishti_full_eye_model.cpb
faceDetectorMean drishti_face_gray_64x64_mean.json
}
[20:58:46.599 | thread:14686 | drishti | info]: Init painter
YUV data is not supported
YUV data is not supported
YUV data is not supported
YUV data is not supported
YUV data is not supported
YUV data is not supported
...
I.e. basically no errors: the camera starts fine and I see myself. The problem is that Ubuntu (or maybe my camera) outputs video in YV12 format, not NV12.
I've added the NV12 conversion shaders (via GPUImage) here https://github.com/hunter-packages/ogles_gpgpu/blob/hunter/ogles_gpgpu/common/proc/yuv2rgb.h for this task.
Thanks, I see it is possible to adapt that code to YV12; the difference is just the byte order: http://blog.csdn.net/fanbird2008/article/details/8232673
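To make that "byte order" point concrete, here is a minimal CPU-side sketch (the `Yv12Planes`/`yv12Planes` names are hypothetical, not part of ogles_gpgpu) of where the three planes sit inside a single contiguous YV12 buffer; adapting I420-style code mostly amounts to swapping the U and V plane offsets:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical helper: locate the Y, V and U planes inside one contiguous
// YV12 buffer. Layout is Y plane, then V, then U -- V before U is the
// "byte order" difference vs. I420/YU12, which stores U before V.
struct Yv12Planes {
    const uint8_t* y;
    const uint8_t* v;
    const uint8_t* u;
};

Yv12Planes yv12Planes(const uint8_t* buf, int width, int height) {
    const std::size_t ySize = static_cast<std::size_t>(width) * height;
    const std::size_t cSize = ySize / 4; // 2x2 subsampled chroma planes
    Yv12Planes p;
    p.y = buf;
    p.v = buf + ySize;         // V plane directly follows Y in YV12
    p.u = buf + ySize + cSize; // U plane comes last
    return p;
}
```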
It sounds like you are on the right path. For reference, the initial check is here:
https://github.com/elucideye/drishti/blob/b6360c37a22ea1efe0581ce8832ee9e3d06a9b75/src/app/qt/facefilter/VideoFilterRunnable.cpp#L215-L219
Which is implemented here:
https://github.com/elucideye/drishti/blob/b6360c37a22ea1efe0581ce8832ee9e3d06a9b75/src/app/qt/facefilter/VideoFilterRunnable.cpp#L243-L254
Another succinct format description: https://stackoverflow.com/a/42240862
YV12: 8 bit Y plane followed by 8 bit 2x2 subsampled V and U planes. So a single frame will have a full size Y plane followed by 1/4th size V and U planes.
NV21: 8-bit Y plane followed by an interleaved V/U plane with 2x2 subsampling. So a single frame will have a full size Y plane followed by a half size plane of interleaved 8 bit V/U pairs.
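The two layouts can be related with a small CPU-side repacking sketch (hypothetical `yv12ToNv21` helper, shown only to make the memory layouts concrete; the GPU path would do the equivalent sampling in a shader instead of copying bytes):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical repacker: convert a planar YV12 frame into an NV21 frame
// by interleaving the separate V and U planes into one V/U plane.
std::vector<uint8_t> yv12ToNv21(const std::vector<uint8_t>& yv12,
                                int width, int height) {
    const std::size_t ySize = static_cast<std::size_t>(width) * height;
    const std::size_t cSize = ySize / 4;            // each chroma plane
    const uint8_t* v = yv12.data() + ySize;         // V plane first in YV12
    const uint8_t* u = yv12.data() + ySize + cSize; // then U
    std::vector<uint8_t> nv21(yv12.begin(), yv12.begin() + ySize); // copy Y
    nv21.reserve(ySize + 2 * cSize);
    for (std::size_t i = 0; i < cSize; ++i) {
        nv21.push_back(v[i]); // NV21 interleaves V first,
        nv21.push_back(u[i]); // then U, per 2x2 block
    }
    return nv21;
}
```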
For YV12 we would need to modify this bit of code:
https://github.com/hunter-packages/ogles_gpgpu/blob/75c13959d983442c4b049080b62bdffdcf01128c/ogles_gpgpu/common/proc/yuv2rgb.cpp#L232-L247
Since the U and V pixels are in separate planes, it would be something like this:
void Yuv2RgbProc::filterRenderPrepare()
{
    shader->use();

    // set the viewport
    glViewport(0, 0, outFrameW, outFrameH);

    glActiveTexture(GL_TEXTURE4);
    glBindTexture(GL_TEXTURE_2D, luminanceTexture);
    glUniform1i(yuvConversionLuminanceTextureUniform, 4);

    if (m_isInterleaved) // i.e., NV12 or NV21
    {
        glActiveTexture(GL_TEXTURE5);
        glBindTexture(GL_TEXTURE_2D, chrominanceTexture);
        glUniform1i(yuvConversionChrominanceTextureUniform, 5);
    }
    else // planar: separate U and V textures
    {
        glActiveTexture(GL_TEXTURE5);
        glBindTexture(GL_TEXTURE_2D, uTexture);
        glUniform1i(yuvConversionUTextureUniform, 5);

        glActiveTexture(GL_TEXTURE6);
        glBindTexture(GL_TEXTURE_2D, vTexture);
        glUniform1i(yuvConversionVTextureUniform, 6);
    }

    glUniformMatrix3fv(yuvConversionMatrixUniform, 1, GL_FALSE, _preferredConversion);
}
With corresponding updates in the shaders.
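For intuition, this is roughly the arithmetic the shader's matrix multiply performs with `_preferredConversion` — a CPU sketch assuming BT.601 video-range coefficients (one common choice; the actual matrix depends on the camera's color standard and range, and the `yuvToRgb601` name is just for illustration):

```cpp
#include <algorithm>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// CPU reference of a BT.601 video-range YUV -> RGB conversion: the same
// 3x3 matrix multiply the shader applies per fragment.
Rgb yuvToRgb601(uint8_t y, uint8_t u, uint8_t v) {
    const float yf = 1.164f * (y - 16);  // luma, scaled from [16,235]
    const float uf = u - 128.0f;         // chroma, centered at 128
    const float vf = v - 128.0f;
    auto clamp8 = [](float x) {
        return static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, x)));
    };
    Rgb out;
    out.r = clamp8(yf + 1.596f * vf);
    out.g = clamp8(yf - 0.392f * uf - 0.813f * vf);
    out.b = clamp8(yf + 2.017f * uf);
    return out;
}
```

For YV12 the only pipeline change is where U and V are fetched from (two separate textures instead of one interleaved texture); the conversion matrix itself is unchanged.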
Are you planning to add this?
Thanks, I understand now how to do this.
Are you planning to add this?
I have to integrate drishti core & hci into my app (dlib) first; right now it works on desktop but is way too slow on mobile -- I guess this will take 3-4 weeks. Afterwards I will try to make the code uniform across mobile/desktop.
Unless you need this feature sooner than that, I can do it.
FYI: PR for YUV12 here https://github.com/hunter-packages/ogles_gpgpu/pull/30 (the code was more or less provided in the above comment, so it was easy to implement: https://github.com/elucideye/drishti/issues/615#issuecomment-348631002). After this we can:
- [x] tag a release in ogles_gpgpu
- [x] update hunter so it knows about it
- [x] update the hunter version in drishti/drishti-upload
- [x] update drishti-upload in drishti
In the near future we can probably collapse the last two steps into a single step as discussed here #608