tpp-mlir
TPP experimentation on MLIR for linear algebra
To enable running softmax with TPPs we need more operations: 1. a max/sum reduce op (%2 and %8), 2. a sub operation (%5). We need to support broadcast semantics in sub or...
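For reference on the semantics involved, a numerically stable softmax in NumPy shows where the max/sum reductions and the broadcasting subtraction appear; this is an illustrative sketch only, not TPP or MLIR code:

```python
import numpy as np

def softmax(x):
    # max-reduce along the last dim (keepdims so the result broadcasts back)
    m = np.max(x, axis=-1, keepdims=True)
    # subtract with broadcast semantics: (N, K) - (N, 1)
    e = np.exp(x - m)
    # sum-reduce, again kept broadcastable for the final division
    s = np.sum(e, axis=-1, keepdims=True)
    return e / s

x = np.array([[1.0, 2.0, 3.0],
              [0.5, 0.5, 0.5]], dtype=np.float32)
y = softmax(x)
print(np.allclose(y.sum(axis=-1), 1.0))  # True: each row sums to one
```

The `keepdims=True` reductions are exactly the spots where broadcast-aware sub/div semantics are needed when lowering this pattern.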
The snippet below is taken from the `xsmm-quarternary-bf16.mlir` test: ``` func.func @entry(%arg0: memref, %arg1: memref, %arg2: memref, %arg3: memref) { %c16_i64 = arith.constant 16 : i64 %func = xsmm.fused_brgemm.dispatch [4, 4, 4, 4,...
[RunnerUtils.cpp](https://github.com/llvm/llvm-project/blob/main/mlir/lib/ExecutionEngine/RunnerUtils.cpp#L212) already has verifiers that we can use for equality. Instead of adding a new dialect, I think we just need a local utility builder that can lower to the...
The current [implementation](https://github.com/plaidml/tpp-mlir/blob/main/tpp-run/MLIRBench.cpp#L97) replaces dense tensors with random values, but this is restricted to `tpp-run`. For `tpp-opt` tests, we can't use that, so we end up using dense tensors, and...
As noted in #492, the TPP matchers all have asserts to make sure the number of ops is correct. This is a problem because: 1. It is poor software engineering practice...
The pass `conv-simplify` moves the bias add to the tensor initialization of a convolution if it's a zero-splat. This is common in matmul networks too, so we should make that...
Could you please provide the ability to generate and install CMake package configuration files? These files allow other CMake projects to find and use your project as a dependency. Please...
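A minimal sketch of what this would look like, using CMake's standard `CMakePackageConfigHelpers` module; the target and export names (`tpp_mlir`, `tpp-mlir-targets`) are hypothetical placeholders, not the project's actual names:

```cmake
include(CMakePackageConfigHelpers)
include(GNUInstallDirs)

# Install the library target and record it in an export set.
install(TARGETS tpp_mlir EXPORT tpp-mlir-targets
        LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
        ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR})

# Generate and install the imported-targets file consumers will include.
install(EXPORT tpp-mlir-targets
        FILE tpp-mlir-targets.cmake
        NAMESPACE tpp-mlir::
        DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/tpp-mlir)

# Version file so find_package() can check compatibility.
write_basic_package_version_file(
    ${CMAKE_CURRENT_BINARY_DIR}/tpp-mlir-config-version.cmake
    VERSION ${PROJECT_VERSION}
    COMPATIBILITY SameMajorVersion)

install(FILES ${CMAKE_CURRENT_BINARY_DIR}/tpp-mlir-config-version.cmake
        DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/tpp-mlir)
```

With this in place, a downstream project could use `find_package(tpp-mlir CONFIG REQUIRED)` and link against `tpp-mlir::tpp_mlir`.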
We recently merged [target descriptor support](https://github.com/llvm/llvm-project/pull/92138) in MLIR, and a number of review comments were left for follow-up PRs. Below is a list (feel free...