Stella Laurenzo
I think it is likely that the problem is leading up to this and we are not narrowing it to a static value. You can't dynamically switch between an expanding...
That IR needs some more work... One thing that jumps out at me: you've got multiple onnx.Constant ops that are just capturing i1 0-d tensors as dense resources. Those are...
There's probably one use of that whole subgraph that is conditioned on some constant or input shape thing... The onnx representation is really bad for that stuff and we'll need...
Before boiling the ocean on this, I'd recommend using a third party tool to simplify the onnx graph. Example: https://github.com/daquexian/onnx-simplifier
@rsuderman Still appears to be there. Can we delete/close this issue?
I doubt it is worth getting this working through the legacy TorchScript path. Focus on either the FX or ONNX path for testing.
MHLO (which is what this repo contains) is the native IR of [JAX](https://github.com/google/jax), which is used heavily for training (on both CPU/GPU/TPU via XLA). However, "training" can mean many things....
Hm, this seems odd to me because I run like this every day. If possible, I'd like to figure out how to repro what you are seeing so we have...
@rsuderman looks like this was marked completed but may need more work