Alexandre Eichenberger
@tungld yes, that would work, especially if we don't try to do symbolic computation. One benefit of the `shapeInference` is that we can hide the `cast` by defining that we...
That is one way to do it; apparently some folks in MLIR proposed/attempted to do that. An alternative would be to add an "identity" op that can carry the dynamic...
@jpienaar Thanks for your feedback; knowing how busy you guys are, it is much appreciated. Interesting info on `shape.with_shape(shape_val, val) -> shape`. If you were to add a variadic optional dim...
I have long advocated using only 64-bit integers. There is no reason to limit ourselves to 32 bits, and that way this entire problem is avoided. I would change...
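To illustrate the point, here is a minimal, standalone C++ sketch (not code from this repo; the tensor shape is hypothetical) showing the kind of silent wraparound that 32-bit sizes invite and that 64-bit sizes avoid:

```cpp
#include <cstdint>
#include <iostream>

int main() {
  // A hypothetical 3000x1000x1000 float tensor: ~3e9 elements.
  int64_t dims[] = {3000, 1000, 1000};
  int64_t numElements = dims[0] * dims[1] * dims[2];                   // 3,000,000,000
  int64_t numBytes = numElements * static_cast<int64_t>(sizeof(float)); // 12,000,000,000
  // Squeezing the element count into 32 bits silently wraps around.
  int32_t narrowElements = static_cast<int32_t>(numElements);
  std::cout << "int64 element count: " << numElements << "\n"
            << "int64 byte count:    " << numBytes << "\n"
            << "int32 element count: " << narrowElements << "\n"; // negative on typical platforms
  return 0;
}
```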
@tungld is this issue resolved? Has the transition to int64 been done (if that is the preferred solution)?
Assigning @chentong319 as he wrote this pass, if I recall correctly.
@XuZheng2000 are you sure you did not mean to address this question to the MLIR community? Here we are using MLIR to lower ONNX graphs. We inherit all properties from...
> Would be good to add some tests for this to avoid similar bugs/regressions in the future

Would a lit test catch this?
Hello, let me first indicate our intentions, and then we can discuss how your use case may fit into this project. In this project, our goals for the ONNX dialect are twofold....
Yes, it is possible; you will need to write some rules on how to rewrite ONNX operations (e.g., convolution) into operations that represent the equivalent cuDNN call....
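As a rough illustration of what such a rule could look like, here is a minimal C++ sketch of an MLIR rewrite pattern that replaces `onnx.Conv` with a hypothetical target op. `MyCudnnConvOp` (and the dialect it would live in) is an assumption standing in for whatever op you define to model the cuDNN call, and the prefixed accessor names assume a recent onnx-mlir build (older builds use unprefixed accessors such as `X()`):

```cpp
// Hypothetical sketch only: MyCudnnConvOp is a placeholder for an op you would
// define in your own dialect to represent the cuDNN convolution call.
#include "mlir/IR/PatternMatch.h"
#include "src/Dialect/ONNX/ONNXOps.hpp"

using namespace mlir;

struct ConvToCudnnPattern : public OpRewritePattern<ONNXConvOp> {
  using OpRewritePattern<ONNXConvOp>::OpRewritePattern;

  LogicalResult matchAndRewrite(ONNXConvOp convOp,
                                PatternRewriter &rewriter) const override {
    // Forward the ONNX operands (input, weights, optional bias) and the
    // convolution attributes (pads, strides, dilations) to the new op.
    rewriter.replaceOpWithNewOp<MyCudnnConvOp>(
        convOp, convOp.getResult().getType(), convOp.getX(), convOp.getW(),
        convOp.getB(), convOp.getPadsAttr(), convOp.getStridesAttr(),
        convOp.getDilationsAttr());
    return success();
  }
};
```

Such a pattern would then be collected into a `RewritePatternSet` and driven by a conversion pass, in the same spirit as the existing lowering passes in this repo.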