Soren Lassen
FWIW, the f32->bf16 rounding-versus-truncation question was also raised in issue #3876
In the lit test onnx_canonicalization.mlir we have a value_ints example: https://github.com/onnx/onnx-mlir/blob/main/test/mlir/onnx/onnx_canonicalization.mlir#LL627-L635 If I put that in a standalone test_constant_3.mlir file: ``` func.func @test_constant_3() -> tensor { %0 = onnx.Constant {value_ints...
Maybe we should canonicalize before the first run of shape inference. I checked that the following change doesn't break any lit tests: ``` diff --git a/src/Compiler/CompilerPasses.cpp b/src/Compiler/CompilerPasses.cpp index 5bcf5c70..7e0cd950 100644...
I tried adding canonicalization before shape inference with the patch in the previous message, on top of the fixes in PR #2232, and now `onnx-mlir gpt2-dataprop.onnx` fails with these messages:...
> Will we have the problem in the new hybrid transformation?

Good question. `onnx-mlir --onnx-hybrid-pass gpt2-dataprop.onnx` doesn't crash, even without the extra canonicalizer pass, but prints ``` loc("Constant_12"): error: Require...
@chentong319 if you want to discuss in person, I can meet on Friday morning, March 17, anytime between 8 and 11am PT, if you're available then
> Does any time next Monday work for you?

Yes, I'm available anytime after 1pm PT on Monday
This PR was completed in PR #2098
@tungld I created a draft PR with a custom attribute DisposableElementsAttr with garbage collection and a few other features to optimize constant propagation: https://github.com/sorenlassen/onnx-mlir-einsum/pull/1 (I created the PR on my...
Let's continue the quest for constant folding in the new issue #2143