3 issues of feiyuvl
Will onnx-mlir support using naive CUDA kernels to write operator kernels?
enhancement
Is there any plan to support cuDNN/cuBLAS calls for convolution and dot computation?
I wrote a simple test to get the Triton code of `WeightOnlyInt8Linear`; the test code is as follows:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightOnlyInt8Linear(torch.nn.Module):
    ...
```
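For context, here is a minimal sketch of what such a test might look like. It assumes a typical weight-only int8 linear layer (int8 weight buffer with per-output-channel scales, dequantized on the fly in `forward`) and uses `torch.compile` to trigger Inductor's Triton code generation; the module body and shapes are assumptions for illustration, not the issue author's original code. Setting the environment variable `TORCH_LOGS="output_code"` makes Inductor print the generated Triton kernels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightOnlyInt8Linear(nn.Module):
    # Assumed layout: int8 weight of shape (out_features, in_features)
    # with one fp16 scale per output channel.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.register_buffer(
            "weight",
            torch.randint(-128, 127, (out_features, in_features), dtype=torch.int8),
        )
        self.register_buffer("scales", torch.ones(out_features, dtype=torch.float16))

    def forward(self, x):
        # Dequantize on the fly: cast the int8 weight to the activation dtype,
        # do the matmul, then apply the per-channel scales.
        return F.linear(x, self.weight.to(dtype=x.dtype)) * self.scales

if __name__ == "__main__":
    model = WeightOnlyInt8Linear(4096, 4096).cuda()
    x = torch.randn(8, 4096, dtype=torch.float16, device="cuda")
    # Run once through torch.compile; with TORCH_LOGS="output_code" set,
    # the generated Triton code is printed to the log.
    compiled = torch.compile(model)
    out = compiled(x)
    print(out.shape)
```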