srcarroll

Results: 14 comments of srcarroll

I didn't do any partial bufferization beforehand. The `bufferization` ops are a side effect of a different transformation. Moreover, the same pass works on the most recent `tf-opt` in...

> I would be interested to see what transformation generates these ops

The transformation is a custom pass we have. We certainly can try to avoid generating the `to_memref`/`to_tensor` ops....
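For reference, the ops in question look like this (a minimal sketch; the types are made up, and the syntax is from the MLIR version these comments were written against):

```
// Sketch: the materialization ops in question. `to_memref` drops to the
// buffer world, `to_tensor` comes back to tensors.
func.func @sketch(%t: tensor<4xf32>) -> tensor<4xf32> {
  %m = bufferization.to_memref %t : memref<4xf32>
  %t2 = bufferization.to_tensor %m : memref<4xf32>
  return %t2 : tensor<4xf32>
}
```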

Oh my bad. It's not exactly the same: `tf-opt --one-shot-bufferize="allow-unknown-ops"` works, but `mlir-opt --one-shot-bufferize="allow-unknown-ops"` does not and still gives the assertion error. Sorry for the confusion. The former does...

Btw, without `allow-unknown-ops`, the `tf-opt` version does throw a proper error that the op was not bufferized.
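Here's a minimal repro sketch of the kind of input I'm testing with (`my.unknown_op` is a made-up name standing in for any op without a `BufferizableOpInterface` implementation):

```
// RUN: mlir-opt --allow-unregistered-dialect --one-shot-bufferize="allow-unknown-ops" %s
// With `allow-unknown-ops`, One-Shot Bufferize wraps the unknown op in
// to_tensor/to_memref materializations; without it, the op should be
// reported as not bufferized instead of tripping an assertion.
func.func @main(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "my.unknown_op"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}
```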

> You can declare the external function with an unranked tensor type.

Oh cool, I didn't know that. Thanks. (A sketch of that is below.)

> The IR that you posted is before running One-Shot Bufferize,...
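Re the unranked declaration, I gather it would look something like this (a sketch; `@external_fn` is a made-up name):

```
// External declaration with an unranked tensor type; ranked values are
// cast to the unranked type at the call site.
func.func private @external_fn(tensor<*xf32>) -> ()

func.func @caller(%arg0: tensor<4xf32>) {
  %0 = tensor.cast %arg0 : tensor<4xf32> to tensor<*xf32>
  call @external_fn(%0) : (tensor<*xf32>) -> ()
  return
}
```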

If I tried to avoid bufferization ops, I would have to do something like the following (semantics not verified, but it should illustrate the pattern needed):

```
func.func @main(%arg0: tensor, %arg1:...
```

Oh, never mind! I didn't know I could do the same thing with `scf.for`, which does bufferize.
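E.g. something like this bufferizes out of the box (an illustrative sketch only; the shapes and the loop body op are made up):

```
// scf.for with a tensor iter_arg; One-Shot Bufferize understands scf.for,
// so no manual to_memref/to_tensor materializations are needed.
func.func @main(%arg0: tensor<8xf32>, %lb: index, %ub: index, %step: index) -> tensor<8xf32> {
  %cst = arith.constant 1.0 : f32
  %res = scf.for %i = %lb to %ub step %step iter_args(%acc = %arg0) -> (tensor<8xf32>) {
    %next = tensor.insert %cst into %acc[%i] : tensor<8xf32>
    scf.yield %next : tensor<8xf32>
  }
  return %res : tensor<8xf32>
}
```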

So I can get onnx-mlir from `torch.pow(x, y)` with `x` and `y` of mixed types. It just casts one of the inputs appropriately before applying `onnx.Pow`. You get something like ```...
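Roughly this (a sketch of the shape of the exported IR, not my original snippet; shapes and types are made up):

```
// The exporter casts `y` to `x`'s element type first, so `onnx.Pow`
// only ever sees matching input types.
func.func @pow(%x: tensor<4xf32>, %y: tensor<4xi32>) -> tensor<4xf32> {
  %0 = "onnx.Cast"(%y) {to = f32} : (tensor<4xi32>) -> tensor<4xf32>
  %1 = "onnx.Pow"(%x, %0) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %1 : tensor<4xf32>
}
```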

So when I say `torch.pow(x, y)` compiles fine, that's with `torch.onnx.export` and `dynamo=False`. So by the time we want to convert to `onnx-mlir`, the graph is already doing the appropriate...

> > This can be potentially easy to resolve, but reading the specs, I have no idea how different types would be handled. Maybe converting the smaller one into the...