satoren
Also, I had to write a GLSL shader to prevent GPU -> CPU transfers in order to get performance on par with selfie_segmentation.
@volodymyrl
> Can you please explain what I need to do to improve media pipe performance?

I'm sorry, I can't post the code, but I hope this gives you a hint...
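To give a rough idea of the kind of blend pass I mean (this is not my actual shader, just a minimal sketch; the uniform and varying names are illustrative): the frame and the confidence mask are both sampled as textures, so the composite never leaves the GPU.

```ts
// Minimal GLSL ES 3.0 blend pass, kept as a string so it can be compiled with
// gl.createShader / gl.shaderSource. u_frame, u_mask, u_background, and
// v_texCoord are illustrative names, not anything from MediaPipe.
const blendFragmentShader = /* glsl */ `#version 300 es
precision mediump float;

uniform sampler2D u_frame;      // camera frame
uniform sampler2D u_mask;       // segmentation confidence mask (single channel)
uniform vec3 u_background;      // background color to blend toward
in vec2 v_texCoord;
out vec4 outColor;

void main() {
  float confidence = texture(u_mask, v_texCoord).r;  // person probability
  vec3 frame = texture(u_frame, v_texCoord).rgb;
  outColor = vec4(mix(u_background, frame, confidence), 1.0);
}
`;
```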
@volodymyrl How about this? https://codepen.io/satoren/pen/rNQXRqp It runs on the CPU and is not optimized for performance like the one [here](https://github.com/google/mediapipe/issues/4630#issuecomment-1657373951).
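For anyone who does not want to open the pen, the CPU path looks roughly like this (a simplified sketch; only the MPMask calls come from @mediapipe/tasks-vision, the function itself is a placeholder):

```ts
import type { MPMask } from "@mediapipe/tasks-vision";

// The mask is read back into a typed array and applied as per-pixel alpha on a
// 2D canvas. getAsUint8Array() is the GPU -> CPU readback that makes this slow.
function drawMaskOnCpu(mask: MPMask, ctx: CanvasRenderingContext2D) {
  const values = mask.getAsUint8Array();
  const image = ctx.createImageData(mask.width, mask.height);
  for (let i = 0; i < values.length; i++) {
    image.data[i * 4 + 3] = values[i];  // mask value -> alpha channel
  }
  ctx.putImageData(image, 0, 0);
  mask.close();                         // release the mask resources
}
```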
You can transfer it by converting it to an [ImageBitmap](https://developer.mozilla.org/ja/docs/Web/API/ImageBitmap).
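A sketch of what I mean by "transfer": an ImageBitmap is a transferable object, so postMessage can move it to a worker instead of structured-cloning the pixels (the worker file name and message shape here are placeholders):

```ts
const worker = new Worker("worker.js");

async function sendFrame(video: HTMLVideoElement) {
  // createImageBitmap captures the current frame; because the bitmap is listed
  // in the transfer array, it is moved to the worker rather than copied.
  const bitmap = await createImageBitmap(video);
  worker.postMessage({ frame: bitmap }, [bitmap]);
}
```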
Also, using a web worker won't make it any faster. The bottleneck here is the transfer from the GPU to the CPU. Try to find an efficient way to convert from the WebGLTexture...
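To make the point concrete, a sketch assuming the MPMask accessors in @mediapipe/tasks-vision (hasWebGLTexture / getAsWebGLTexture / getAsUint8Array): keep the mask on the GPU as a texture and sample it in a shader; any readback is the transfer that dominates the cost, no matter which thread it happens on.

```ts
import type { MPMask } from "@mediapipe/tasks-vision";

function consumeMask(mask: MPMask, gl: WebGL2RenderingContext) {
  if (mask.hasWebGLTexture()) {
    const tex = mask.getAsWebGLTexture(); // no copy; the texture stays on the GPU
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // ... bind it as u_mask for a blend shader and draw ...
  } else {
    // Reading pixels back (here or via gl.readPixels) is the slow GPU -> CPU path.
    const pixels = mask.getAsUint8Array();
    console.debug("fell back to CPU readback", pixels.length);
  }
  mask.close(); // release the mask once it has been consumed
}
```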
@torinmb Perhaps a different canvas was created for the second segmenter, so its GL context is also different. You can pass the canvas to the [task creation options](https://github.com/google/mediapipe/blob/ed0c8d8d8bbd466eac1e483ab62a42dd7d486e96/mediapipe/tasks/web/vision/core/vision_task_options.d.ts#L34) so that...
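A hedged sketch of what I mean, based on the `canvas` field in the vision_task_options.d.ts linked above (the wasm CDN path and model file name are placeholders): create every segmenter against the same canvas so they share one GL context.

```ts
import { FilesetResolver, ImageSegmenter } from "@mediapipe/tasks-vision";

const sharedCanvas = document.createElement("canvas");
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);

const segmenter = await ImageSegmenter.createFromOptions(vision, {
  baseOptions: { modelAssetPath: "selfie_segmenter.tflite", delegate: "GPU" },
  canvas: sharedCanvas, // a second segmenter should be created with this same canvas
  runningMode: "VIDEO",
  outputConfidenceMasks: true,
});
```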
@volodymyrl Unfortunately, I am not from Google. But I was able to write an [example](https://codepen.io/satoren/pen/xxQvmjv). @torinmb I hope this will be helpful to you.
@danrossi CanvasRenderingContext2D is GPU-accelerated, so drawImage also runs on the GPU. It is more efficient to blend once as in your method, but that is not as important. Your...
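Not necessarily your exact approach, but one way to do the whole composite with drawImage calls on a CanvasRenderingContext2D (all of which the browser runs on the GPU); every parameter name here is a placeholder:

```ts
function composite(
  ctx: CanvasRenderingContext2D,
  maskCanvas: CanvasImageSource, // mask already rendered as alpha
  video: HTMLVideoElement,
  background: CanvasImageSource
) {
  const { width, height } = ctx.canvas;
  ctx.clearRect(0, 0, width, height);
  ctx.drawImage(maskCanvas, 0, 0, width, height);
  ctx.globalCompositeOperation = "source-in";        // keep the frame only where the mask is opaque
  ctx.drawImage(video, 0, 0, width, height);
  ctx.globalCompositeOperation = "destination-over"; // fill everything else with the background
  ctx.drawImage(background, 0, 0, width, height);
  ctx.globalCompositeOperation = "source-over";      // restore the default
}
```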
@danrossi
> You seem to be creating MPMask instances without invoking .close(). This leaks resources.

Oh, thank you. It turns out we need to close explicitly when the canvas is passed. My example...
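For reference, the pattern I mean by closing explicitly, assuming the segmentForVideo overload that returns an ImageSegmenterResult rather than taking a callback (names are placeholders):

```ts
import type { ImageSegmenter } from "@mediapipe/tasks-vision";

function segmentFrame(segmenter: ImageSegmenter, video: HTMLVideoElement) {
  const result = segmenter.segmentForVideo(video, performance.now());
  const mask = result.confidenceMasks?.[0];
  if (!mask) return;
  try {
    // ... read or render the mask here ...
  } finally {
    mask.close(); // every MPMask owns GPU/CPU resources; close it when done
  }
}
```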
I am facing the same problem. Although not efficient, I considered converting the blob to base64 and treating it as a key, but that was impossible because the BlobEngine set...