Android-UltimateGPUImage
Is this library correct for my use-case?
So basically, I'm trying to build an app where you can colour the walls in real time.
I was trying to do Sobel edge detection, get the edges from that, and then flood-fill the enclosed space. I am not sure if this is the correct library I should be using.
Any help would be appreciated!
Hi, thanks for asking ;p If you just want to use Sobel edge detection to modify a single picture, you can use this lib like this:
```java
@Override
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity);
    // prepare surface
    mRecorderViews = (FilterRecoderView) findViewById(R.id.vp_video_recorder_gl);
    // prepare the filter you want to use
    final GPUImageContrastFilter contrastFilter = new GPUImageContrastFilter(1.0f);
    contrastFilter.setContrast(contrastValue);
    // bind to PureImageManager and render
    PureImageManager.init(context)
            .setGLSurfaceView(mRecorderViews)
            .setScaleType(GPUImage.ScaleType.CENTER_INSIDE)
            .setImage(mUri)
            .setFilter(contrastFilter)
            .requestRender();
}
```
But the filter you use to process the input frame buffer needs to be defined by yourself. There are two ready-made Sobel edge detection filters of different types. One is a pure Sobel filter, which you can find at
`cn.co.willow.android.ultimate.gpuimage.core_render_filter.image_enhance_filter.filter_3x3_sampling.GPUImageDirectionalSobelEdgeDetectionFilter`
and the other is a filter group:
`cn.co.willow.android.ultimate.gpuimage.core_render_filter.recommend_effect_filter_group.GPUImageSobelEdgeDetection`
I suggest you consult these two, especially the second one. The purpose of this lib is to process video/audio streams with filters and the like.
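If it helps to understand what those GPU filters compute, here is a small CPU reference of the Sobel operator in plain Java. This is independent of the library: the class name, array layout, and method signature are my own, purely for illustration of the underlying math.

```java
public class SobelReference {
    // Compute Sobel gradient magnitude for a grayscale image stored
    // row-major in gray[y * width + x], with values in 0..255.
    // Border pixels are left at 0 for simplicity.
    public static int[] sobel(int[] gray, int width, int height) {
        int[] out = new int[gray.length];
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                // 3x3 Sobel kernels: gx detects vertical edges, gy horizontal ones
                int gx = -gray[(y - 1) * width + (x - 1)] + gray[(y - 1) * width + (x + 1)]
                       - 2 * gray[y * width + (x - 1)]    + 2 * gray[y * width + (x + 1)]
                       - gray[(y + 1) * width + (x - 1)]  + gray[(y + 1) * width + (x + 1)];
                int gy = -gray[(y - 1) * width + (x - 1)] - 2 * gray[(y - 1) * width + x]
                       - gray[(y - 1) * width + (x + 1)]
                       + gray[(y + 1) * width + (x - 1)] + 2 * gray[(y + 1) * width + x]
                       + gray[(y + 1) * width + (x + 1)];
                // gradient magnitude, clamped to the 0..255 range
                out[y * width + x] = (int) Math.min(255, Math.sqrt(gx * gx + gy * gy));
            }
        }
        return out;
    }
}
```

The fragment shaders in the library's 3x3-sampling filters compute essentially this per pixel, just on the GPU.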
Hey @Windsander. Thanks for the reply, I really appreciate it.
A couple of questions for you:
- If I want to define my own filters (with my own shaders), how can I do that?
- If I want to flood-fill a specific frame, how can I do that? I don't think we can use shaders/the GPU to flood fill, so is there a way to get each frame's data and do some CPU processing on it?
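For the CPU side, flood fill itself is straightforward once you have the frame as a pixel array (how to get that array out of this library is the open question above). A minimal sketch in plain Java, using an explicit stack rather than recursion to avoid stack overflows on large regions; the class name and array layout are my own assumptions for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FloodFill {
    // Replace the 4-connected region containing (startX, startY) with fillColor.
    // pixels is row-major ARGB in pixels[y * width + x], modified in place.
    public static void fill(int[] pixels, int width, int height,
                            int startX, int startY, int fillColor) {
        int target = pixels[startY * width + startX];
        if (target == fillColor) return; // nothing to do
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{startX, startY});
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int x = p[0], y = p[1];
            if (x < 0 || x >= width || y < 0 || y >= height) continue;
            if (pixels[y * width + x] != target) continue;
            pixels[y * width + x] = fillColor;
            // push the 4-connected neighbours
            stack.push(new int[]{x + 1, y});
            stack.push(new int[]{x - 1, y});
            stack.push(new int[]{x, y + 1});
            stack.push(new int[]{x, y - 1});
        }
    }
}
```

On Android you would typically pull the pixels out with `Bitmap.getPixels`, fill, then write them back with `Bitmap.setPixels`. For the wall-colouring idea, you would additionally stop the fill at pixels the Sobel pass marked as edges, treating them as barriers.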
Also, about the code you gave me:
```java
@Override
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity);
    // prepare surface
    mRecorderViews = (FilterRecoderView) findViewById(R.id.vp_video_recorder_gl);
    // prepare the filter you want to use
    final GPUImageContrastFilter contrastFilter = new GPUImageContrastFilter(1.0f);
    contrastFilter.setContrast(contrastValue);
    // bind to PureImageManager and render
    PureImageManager.init(context)
            .setGLSurfaceView(mRecorderViews)
            .setScaleType(GPUImage.ScaleType.CENTER_INSIDE)
            .setImage(mUri)
            .setFilter(contrastFilter)
            .requestRender();
}
```
If I want to get frames from a camera continuously, should I be using this code? Should I be calling `.setImage(cameraFrameBitmap)` for every frame? Or should I be doing something else?
Thank you.
:)