AMDMIGraphX
AMD's graph optimization engine.
Increase performance of layernorm for the UNet model by ~33%: a tensor of {float, 2, 32, 81920} was evaluated as a test case.
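As context for the layernorm item above, here is a minimal NumPy reference of the operation being optimized, assuming normalization over the last axis (a smaller shape stands in for the {2, 32, 81920} test tensor):

```python
import numpy as np

def layernorm(x, eps=1e-5):
    # Normalize over the last axis: subtract the mean, divide by the std.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Smaller stand-in for the {float, 2, 32, 81920} test case.
x = np.random.rand(2, 32, 1024).astype(np.float32)
y = layernorm(x)
print(np.allclose(y.mean(axis=-1), 0.0, atol=1e-3))  # True
```

The reduction over 81920 elements per row is what makes this kernel memory-bound and worth fusing.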
rocblas, MIOpen, etc. should not be static dependencies, as they currently are; rather, they should be loaded dynamically if those backends are enabled (or not disabled). If a backend is not...
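The dynamic-loading pattern requested above can be sketched with `ctypes`; this is an illustration only, using libm as a stand-in for a real backend library such as rocblas or MIOpen:

```python
import ctypes
import ctypes.util

def load_backend(name):
    """Try to load a backend library at run time; return None if absent."""
    path = ctypes.util.find_library(name)
    if path is None:
        return None
    try:
        return ctypes.CDLL(path)
    except OSError:
        return None

# 'm' (libm) stands in for an optional backend; a missing library
# simply disables the backend instead of breaking the whole install.
lib = load_backend("m")
print("backend available" if lib is not None else "backend disabled")
```

In C++ the same idea would use `dlopen`/`dlsym` (or delay-loaded DLLs on Windows), so a missing backend fails gracefully at run time rather than at link time.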
https://github.com/ROCm/AMDMIGraphX/pull/3010/files#r1648254890 With #3010, MIGraphX can fuse pointwise inputs into the dot/conv instruction for MLIR. It does not yet handle reshapes that occur on the pointwise instruction's inputs. We can add those.
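The reason those reshapes are fusable at all can be shown with a small NumPy check: an elementwise (pointwise) op commutes with reshape, so a reshape on a pointwise input can be hoisted past the op and absorbed into the fusion. A hedged illustration, not MIGraphX code:

```python
import numpy as np

def pointwise(x):
    # Example elementwise op: relu followed by a scalar multiply.
    return np.maximum(x, 0) * 0.125

x = np.arange(24, dtype=np.float32).reshape(2, 12) - 12

a = pointwise(x.reshape(2, 3, 4))  # reshape feeding the pointwise op
b = pointwise(x).reshape(2, 3, 4)  # pointwise applied first, reshape after

print(np.array_equal(a, b))  # True
```

Because the two orders are equivalent, the compiler is free to move the reshape out of the way and hand MLIR a plain pointwise producer to fuse.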
It should be possible to execute a MIGraphX-compiled model without needing a full MIGraphX installation, in particular without needing the machinery that parses ONNX files, compiles models, etc... This...
It should be possible to enable run-time accuracy debugging, i.e. inspection of the values of a tensor, for the purpose of detecting 0s, NaNs, or any other user-specified condition, for...
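The inspection hook described above might look like the following sketch; the function name and default predicate are hypothetical, chosen only to show the "user-specified condition" idea:

```python
import numpy as np

def check_tensor(name, t, predicate=lambda v: ~np.isfinite(v)):
    """Flag elements matching a user-specified condition (default: NaN/Inf)."""
    bad = predicate(t)
    if bad.any():
        idx = np.argwhere(bad)[0]
        print(f"{name}: {bad.sum()} suspect values, first at {tuple(idx)}")
        return True
    return False

t = np.ones((2, 3), dtype=np.float32)
t[1, 2] = np.nan

check_tensor("relu_out", t)                              # flags the NaN
check_tensor("relu_out", t, predicate=lambda v: v == 0)  # user condition: zeros
```

Hooked in after each instruction's evaluation, such a predicate would localize where bad values first appear in the graph.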
Description

Checklist
- [ ] Get reference model
- [ ] Model uploaded to nas mount
- [ ] Capture model for Performance review
- [ ] Capture TensorRT execution...
https://github.com/ROCm/AMDMIGraphX/pull/3081/files#r1638157103
Currently, MIGraphX only uses the Find 2.0 API of MIOpen for the "Convolution" operators. One of the key optimizations of MIGraphX is fusion, and for that it makes use...
Viewing code in ONNX Runtime for IsUnsupportedOpMode() before we compile in a model. Seeing cases which are handled correctly in MIGraphX but are still marked as not supported in ONNX Runtime -...
```
module: "mlir_main:pointwise157"
mlir_main:pointwise157:@0 = @literal{0.125} -> half_type, {1}, {0}
mlir_main:pointwise157:y1 = @param:y1 -> half_type, {1, 1, 2304}, {2304, 2304, 1}
mlir_main:pointwise157:y0 = @param:y0 -> half_type, {1, 1, 2304}, {2304,...
```