Create tensor from GPUBuffer (GPUReadData)

This only works on WebGPU. TODO: refine the API docs and add more dtypes. Use case:
```ts
const shape: Shape = [16];
const gpuReadData = new tf.GPUReadData(aBuffer, shape, dtype);
const a = tf.tensor1d(gpuReadData);
```
BUG: https://github.com/tensorflow/tfjs/issues/6232
@qjia7 @xhcao @haoyunfeix @gyagp, this is for API discussion.
The use case for this PR:
```ts
const shape: Shape = [16];
const gpuReadData = new tf.GPUReadData(gpuReadBuffer, shape, dtype);
const a = tf.tensor1d(gpuReadData);
```
API:
```ts
export function tensor1d(
    values: TensorLike1D|GPUReadData, shape?: number[],
    dtype?: DataType): Tensor1D {
  // ...
}
```
```ts
export class GPUReadData {
  readonly buffer: GPUBuffer;
  readonly shape: number[];
  readonly dtype: DataType;

  constructor(buffer: GPUBuffer, shape: number[], dtype?: DataType) {
    this.buffer = buffer;
    this.shape = shape;
    this.dtype = dtype || 'float32';
  }
}
```
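For illustration, here is a minimal, runnable sketch of how this class behaves. Note that `GPUBuffer` and `DataType` are stubbed out below (an assumption for illustration only; in a real WebGPU environment `GPUBuffer` comes from the browser API and `DataType` from tfjs):

```ts
// Stubs standing in for the real WebGPU / tfjs types (illustration only).
type GPUBuffer = {label?: string};
type DataType = 'float32'|'int32'|'bool';

class GPUReadData {
  readonly buffer: GPUBuffer;
  readonly shape: number[];
  readonly dtype: DataType;

  constructor(buffer: GPUBuffer, shape: number[], dtype?: DataType) {
    this.buffer = buffer;
    this.shape = shape;
    // dtype falls back to 'float32' when not specified.
    this.dtype = dtype || 'float32';
  }
}

const fakeBuffer: GPUBuffer = {};
const data = new GPUReadData(fakeBuffer, [16]);
console.log(data.dtype);  // prints "float32" (the default)
```

The key design point is that the buffer, shape, and dtype travel together in one object, so `tensor1d` needs only a single extra accepted input type.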
In another PR https://github.com/tensorflow/tfjs/pull/6684, the use case is as below:
```ts
const a = tf.tensor1d(gpuReadBuffer, 'float32', shape);
```
API:
```ts
export function tensor1d(
    values: TensorLike1D|GPUBuffer, shape?: number[],
    dtype?: DataType): Tensor1D {
  // ...
}
```
Some initial opinions on this PR.
Currently, the GPUData returned from dataToGPU is as below:
```ts
export interface GPUData {
  tensorRef: Tensor;
  texture?: WebGLTexture;
  buffer?: GPUBuffer;
  texShape?: [number, number];
  bufSize?: number;
}
```
I am wondering whether we could refactor it as below (note, however, that this would break the existing WebGL usage of dataToGPU):
```ts
export interface GPUTextureData {
  texture: WebGLTexture;
  texShape: [number, number];
}

export interface GPUBufferData {
  buffer: GPUBuffer;
  size: number;
  type: string;
}

export interface GPUData {
  tensorRef: Tensor;
  textureData?: GPUTextureData;
  bufferData?: GPUBufferData;
}
```
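With this shape, backend-agnostic code could branch on which optional field is present. A sketch of that idea, with the browser-only types stubbed out so it runs anywhere (the `hasBufferData` helper is hypothetical, not part of any proposed API):

```ts
// Stubs for browser-only / tfjs types (assumptions for illustration).
type WebGLTexture = object;
type GPUBuffer = object;
type Tensor = object;

interface GPUTextureData {
  texture: WebGLTexture;
  texShape: [number, number];
}
interface GPUBufferData {
  buffer: GPUBuffer;
  size: number;
  type: string;
}
interface GPUData {
  tensorRef: Tensor;
  textureData?: GPUTextureData;
  bufferData?: GPUBufferData;
}

// Narrowing helper: does this GPUData carry a WebGPU buffer?
function hasBufferData(d: GPUData): d is GPUData&{bufferData: GPUBufferData} {
  return d.bufferData !== undefined;
}

const webgpuResult: GPUData = {
  tensorRef: {},
  bufferData: {buffer: {}, size: 64, type: 'float32'},
};
console.log(hasBufferData(webgpuResult));  // true
```

Grouping the texture fields and the buffer fields into their own interfaces means a caller can never see a half-populated mix (e.g. a `texture` with a `bufSize`), which is possible with the current flat `GPUData`.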
Then the interface for creating a tensor from GPU data would look like this:
- Option 1: Reuse the current interface
```ts
export function tensor<R extends Rank>(
    values: TensorLike|GPUBufferData|GPUTextureData, shape?: ShapeMap[R],
    dtype?: DataType): Tensor<R>
```
This option simply extends the current tensor creation function to also accept GPU data (GPUBufferData|GPUTextureData) as input.
For the implementation, maybe we can reuse the existing makeTensor and write functions.
A small issue is that if the input is GPUBufferData|GPUTextureData, we can't infer the shape and dtype from values with the current rules. We could either provide a default rule for deriving a shape from the GPU resource, or make the shape mandatory when the values' type is GPUBufferData|GPUTextureData.
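One possible consistency check if the shape is made mandatory: verify the user-supplied shape against the buffer's byte size. A hedged sketch (the helper name is hypothetical and not part of the tfjs API; it assumes a fixed bytes-per-element for the dtype, e.g. 4 for float32/int32):

```ts
// Hypothetical helper: validate that a user-supplied shape matches the
// byte size of a GPU buffer, assuming a fixed element width.
function validateShapeAgainstSize(
    shape: number[], byteSize: number, bytesPerElement = 4): void {
  const numElements = shape.reduce((a, b) => a * b, 1);
  if (numElements * bytesPerElement !== byteSize) {
    throw new Error(
        `Shape [${shape}] implies ${numElements * bytesPerElement} bytes, ` +
        `but the buffer has ${byteSize} bytes.`);
  }
}

validateShapeAgainstSize([16], 64);  // ok: 16 float32 elements = 64 bytes
```

This turns a silent shape/size mismatch into an immediate, descriptive error at tensor-creation time instead of garbage data later.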
- Option 2: Add a new interface
```ts
export function tensorGPU<R extends Rank>(
    values: GPUBufferData|GPUTextureData, shape: ShapeMap[R],
    dtype?: DataType): Tensor<R>
```
@pyu10055 @lina128 What do you think? (Personally, I think option 1 is fine, since we don't need to add a new interface.)
This change also affects dataToGPU. That API has already been released for WebGL, so I'm not sure whether we can still change the structure of its return value, or whether WebGL should stay as it is. Still, it would be great to keep a consistent data structure between dataToGPU and creating a tensor from GPU data.
BTW, @gyagp and I had some offline discussions about dataToGPU and found some issues; @gyagp will provide a more detailed description of them.
Now that the API has been defined in https://github.com/tensorflow/tfjs/pull/6853, I will close this and create a new PR for WebGPU.