Lu Fang
python test/test_caffe2.py -v TestCaffe2BackendATen.test_alexnet
@onnxbot retest this please
@prasanthpul yes, this is an issue. We did use names such as "gpu_0/data_0"
@linerzhang let me write a script to convert the names
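A minimal sketch of what such a renaming script might look like. This is an assumption about the conversion, not the actual script: it simply strips a leading device scope (e.g. "gpu_0/") that Caffe2 exports often prepend to blob names, so "gpu_0/data_0" becomes "data_0". The helper names and the prefix pattern are hypothetical.

```python
import re

# Hypothetical pattern for a Caffe2 device scope prefix such as "gpu_0/" or "cpu_0/".
DEVICE_SCOPE = re.compile(r"^(?:gpu|cpu)_\d+/")

def strip_device_scope(name: str) -> str:
    """Remove a leading device scope from a blob name, e.g. 'gpu_0/data_0' -> 'data_0'."""
    return DEVICE_SCOPE.sub("", name)

def convert_names(names):
    """Apply the conversion to every blob name in a list."""
    return [strip_device_scope(n) for n in names]
```

For example, `convert_names(["gpu_0/data_0", "gpu_0/fc_w"])` yields `["data_0", "fc_w"]`; names without a device scope pass through unchanged.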
We are resuming the integration, since CPU optimization is becoming quite important for Meta-internal use cases. For the AMD CPU use case, there are at least two feasible paths: 1) integrate...
We do have triton-cpu: https://github.com/triton-lang/triton-cpu. The goal is to use Inductor (AOT mode) to 1) leverage/generate high-performance kernels; 2) remove the framework overhead. The model is similar to model...
Thanks for the update, @naveenthangudu. Does this work with AOTInductor? We probably need to give it a try.
Hi @naveenthangudu , sure, we would like to see if we can adopt it for Meta's use cases.
This issue caused failures in some internal models