How to use data kept on GPU after model inference with tensorflow.js v3.19.0 with webgpu?
I went through the pure GPU pipeline example provided by @lina128 for the webgl backend. Is there an example for the webgpu backend as well? I want to use the inference result from my tensorflow.js code to render directly to a canvas, accessing the buffer returned by the tensor's dataToGPU call.
@xhcao implemented webgpu's dataToGPU. Can you take a look and give an example based on Na's demo for webgpu?
Hi, @rahul-lokesh , dataToGPU returns a WebGLTexture on the WebGL backend, but a GPUBuffer on the WebGPU backend; you can find more information at https://github.com/tensorflow/tfjs/blob/master/tfjs-core/src/tensor.ts#L397. I am trying to enable the pure GPU pipeline example on the WebGPU backend, but it will take some time.
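For reference, a minimal sketch of what the two call sites look like. The GPUData field names (texture, buffer, tensorRef) follow the tensor.ts source linked above; `model`, `input`, and the two render helpers are hypothetical placeholders:

```js
// Minimal sketch: dataToGPU keeps the inference result on the GPU.
// Field names follow tfjs-core's GPUData type; verify against your tfjs version.
const prediction = model.predict(input);  // `model` and `input` are placeholders
const gpuData = prediction.dataToGPU();

if (tf.getBackend() === 'webgl') {
  // WebGL backend: the data lives in a WebGLTexture.
  renderWithWebGL(gpuData.texture);  // hypothetical rendering helper
} else {
  // WebGPU backend: the data lives in a GPUBuffer.
  renderWithWebGPU(gpuData.buffer);  // hypothetical rendering helper
}

// Dispose the returned tensor reference once the GPU resource is no longer needed.
gpuData.tensorRef.dispose();
```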
Hi @xhcao , thanks for sharing the file containing the source for the dataToGPU call. It would be really helpful if you could share an example of the pure GPU pipeline.
related #6683
Hi, @rahul-lokesh , I have already ported the pure GPU pipeline to the webgpu backend: https://github.com/tensorflow/tfjs-examples/pull/867. However, there is an issue with exporting webgpu utilities (https://github.com/tensorflow/tfjs/pull/6707) that prevents the webgpu backend from registering successfully. If you build the related tfjs packages locally, you can run the example above.
Thanks for the example, @xhcao. I will try it out and let you know if I face any issues.
Hi @xhcao, I am trying to render the output from my model, which is a color image, on the webgpu canvas. I am not a webgl or webgpu expert, so I tried a few modifications in the pixel shader without much success. Any suggestions would be really helpful.
Hi, @rahul-lokesh, The underlying object of a Tensor is a GLTexture on the webgl backend, but a GPUBuffer on the webgpu backend. Could you provide your example code here? I can help debug it.
Hi @xhcao, please find the code below:
- This is the init part:

```js
const kernels = tf.getKernelsForBackend('webgpu');
kernels.forEach(kernelConfig => {
  const newKernelConfig = {...kernelConfig, backendName: 'custom-webgpu'};
  tf.registerKernel(newKernelConfig);
});
adapter = await navigator.gpu.requestAdapter();
device = await adapter.requestDevice();
canvas = document.getElementById('myCanvas');
tf.registerBackend('custom-webgpu', () => new WebGPUBackend(device));
await tf.setBackend('custom-webgpu');
await tf.ready();
ctx = canvas.getContext('webgpu');
const presentationFormat = navigator.gpu.getPreferredCanvasFormat();
const presentationSize = [256, 256];
ctx.configure({
  device,
  size: presentationSize,
  format: presentationFormat,
  alphaMode: 'opaque',
});
pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: {
    module: device.createShaderModule({code: VERTEX_SHADER}),
    entryPoint: 'main',
  },
  fragment: {
    module: device.createShaderModule({code: PIXEL_SHADER}),
    entryPoint: 'main',
    targets: [{format: presentationFormat}],
  },
  primitive: {topology: 'triangle-list'},
});
sampler = device.createSampler({magFilter: 'linear', minFilter: 'linear'});
sizeParams = {width: 256, height: 256};
sizeParamBuffer = device.createBuffer({
  size: 2 * Int32Array.BYTES_PER_ELEMENT,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(
    sizeParamBuffer, 0,
    new Int32Array([sizeParams.width, sizeParams.height]));
```
- This is the part after obtaining the output from the model:

```js
const out_img_data = await convert_tensor_to_data(out_denorm);
// out_img_data holds the GPUBuffer for an RGBA image with values between 0 and 1.
const uniformBindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    {binding: 1, resource: {buffer: out_img_data.buffer}},
    {binding: 2, resource: {buffer: sizeParamBuffer}},
  ],
});
const commandEncoder = device.createCommandEncoder();
const textureView = ctx.getCurrentTexture().createView();
const renderPassDescriptor = {
  colorAttachments: [
    {
      view: textureView,
      clearValue: {r: 0.0, g: 0.0, b: 0.0, a: 1.0},
      loadOp: 'clear',
      storeOp: 'store',
    },
  ],
};
const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
passEncoder.setPipeline(pipeline);
passEncoder.setBindGroup(0, uniformBindGroup);
passEncoder.draw(6, 1, 0, 0);
passEncoder.end();
device.queue.submit([commandEncoder.finish()]);
```
- Shader code:

```js
export const VERTEX_SHADER = `
struct VertexOutput {
  @builtin(position) Position : vec4<f32>,
  @location(0) fragUV : vec2<f32>,
}

@vertex
fn main(@builtin(vertex_index) VertexIndex : u32) -> VertexOutput {
  var pos = array<vec2<f32>, 6>(
      vec2<f32>( 1.0,  1.0),
      vec2<f32>( 1.0, -1.0),
      vec2<f32>(-1.0, -1.0),
      vec2<f32>( 1.0,  1.0),
      vec2<f32>(-1.0, -1.0),
      vec2<f32>(-1.0,  1.0));
  var uv = array<vec2<f32>, 6>(
      vec2<f32>(1.0, 0.0),
      vec2<f32>(1.0, 1.0),
      vec2<f32>(0.0, 1.0),
      vec2<f32>(1.0, 0.0),
      vec2<f32>(0.0, 1.0),
      vec2<f32>(0.0, 0.0));
  var output : VertexOutput;
  output.Position = vec4<f32>(pos[VertexIndex], 0.0, 1.0);
  output.fragUV = uv[VertexIndex];
  return output;
}`;

export const PIXEL_SHADER = `
struct SizeParams {
  width : i32,
  height : i32,
}

@group(0) @binding(1) var<storage, read_write> buf : array<vec4<f32>>;
@group(0) @binding(2) var<uniform> size : SizeParams;

@fragment
fn main(@location(0) fragUV : vec2<f32>) -> @location(0) vec4<f32> {
  let coord = vec2(fragUV.x, fragUV.y);
  let rowCol = vec2<i32>(i32(coord.y * f32(size.height)), i32(coord.x * f32(size.width)));
  let color = (buf[rowCol.x * size.width + rowCol.y]).rgb; // Not sure if this is right
  var out_color : vec4<f32>;
  let purple = vec4<f32>(color, 1.0);
  return out_color;
}`;
```
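As a side note on the line marked "Not sure if this is right": the indexing follows standard row-major layout, where pixel (row, col) of an image of width W, stored one vec4 per pixel, sits at index row * W + col. A quick sanity check in plain JavaScript, assuming a 256×256 image (pixelIndex is a hypothetical helper for illustration):

```js
// Row-major indexing: for an image of width W stored one vec4 per pixel,
// pixel (row, col) sits at buffer index row * W + col.
function pixelIndex(row, col, width) {
  return row * width + col;
}
console.log(pixelIndex(0, 0, 256));     // 0     (top-left pixel)
console.log(pixelIndex(0, 255, 256));   // 255   (end of the first row)
console.log(pixelIndex(1, 0, 256));     // 256   (start of the second row)
console.log(pixelIndex(255, 255, 256)); // 65535 (bottom-right pixel)
```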
Any suggestions would be really helpful.
Hi, @rahul-lokesh , I did not find the problem in your code.
- Could you describe the wrong result and the expected one? Is the drawing incorrect, or does it draw nothing at all?
- You can check the console log for errors: Right-click -> Inspect to open the console.
- Is convert_tensor_to_data a custom function, and does it return the dataToGPU result? Is the GPUBuffer size equal to the canvas size?
- If possible, could you provide the full code of your example?
Hi @xhcao ,
- Currently I don't see any drawing on the canvas. The model output is correct: I verified it by visualizing the custom webgpu backend results on a 2D canvas, transferring the data back to the CPU with the .data() function. I had previously implemented this on a custom webgl backend and rendered the output to a webgl2 canvas, and it worked fine, so I followed similar steps for the custom webgpu backend.
- I monitored the console logs but don't see any errors. We won't be able to see errors in the shader code that way, right? (See the note on shader errors after this list.)
- Yes, convert_tensor_to_data is a custom function:

```js
async function convert_tensor_to_data(tensor) {
  const data = await tensor.dataToGPU();
  return data;
}
```

The GPUBuffer size is 256 * 256 * 4 * 4 = 1,048,576 bytes, where the height and width of the image are 256, with 4 (RGBA) channels of 4-byte float values.
- Adding the complete code would only add confusion, because it contains additional logic for feeding the input to the model. Let me know if there is any specific code you want me to add; I'll be happy to post it here.
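On the shader-error question above: WebGPU can surface WGSL compile errors and validation errors explicitly. A minimal sketch, assuming `device` is the GPUDevice from the init code; note that older implementations expose compilationInfo() instead of getCompilationInfo():

```js
// Catch errors that would otherwise only appear as console warnings.
device.addEventListener('uncapturederror', (event) => {
  console.error('WebGPU uncaptured error:', event.error.message);
});

// Check WGSL compilation messages for a shader module explicitly.
device.pushErrorScope('validation');
const module = device.createShaderModule({code: PIXEL_SHADER});
const info = await module.getCompilationInfo();
for (const msg of info.messages) {
  console.log(`${msg.type} at line ${msg.lineNum}: ${msg.message}`);
}
const error = await device.popErrorScope();
if (error) {
  console.error('Validation error:', error.message);
}
```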
Hi, @rahul-lokesh , there is a problem in your pixel shader.

Original shader:

```wgsl
var out_color : vec4<f32>;
let purple = vec4<f32>(color, 1.0);
return out_color;
```

I think it should be as shown below:

```wgsl
var out_color : vec4<f32>;
out_color = vec4<f32>(color, 1.0);
return out_color;
```
I merged all the code you provided into my example and fixed the shader issue above, and I could see the right result on the canvas. If you still get nothing on the canvas, please check whether all values of the model's prediction are zero: use https://js.tensorflow.org/api/latest/#tf.Tensor.data to download the data and then print it.
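A minimal sketch of that check, assuming out_denorm is the prediction tensor from the code above:

```js
// Download the prediction to the CPU and check its value range.
const values = await out_denorm.data();
let min = Infinity;
let max = -Infinity;
for (const v of values) {
  if (v < min) min = v;
  if (v > max) max = v;
}
console.log(`min=${min}, max=${max}, all zero=${min === 0 && max === 0}`);
```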
Hi @xhcao , the shader code typo was introduced while I was doing some debugging. I found the issue in the meantime; it was not related to the shader code, though. Thanks for providing the example and for your support in debugging the issue.
hi @xhcao
I tried to use the method you're showing in the example, but I'm not having much luck. I tried to incorporate the changes that @rahul-lokesh mentioned above, but that didn't solve the problem either.
This is what I get when using the standard toPixels approach and placing things on the canvas after getting the inference output from the model (an RGB image): [screenshot]

And this is what I get when using the pure GPU approach: [screenshot]
I see two major problems:
- there are tons of errors in the console (also from model inference; zero errors before), and the inference result is slightly different from the standard 'webgpu' backend
- the image is repeated; is this a shader issue?
I uploaded the code in case you want to take a look: https://img-cut.aishoot.co/puregpu.html. LMK if I can provide any more info 🤔
Closing this issue since the example has been provided. Please file a new one if you encounter other problems.