Sean Silva
> We definitely don't want to regress our testing of dynamic shapes for all backends due to a limitation in the TOSA backend, so I would avoid changing the test. We...
Thanks for this. I put up some PRs that chip away at this model. https://github.com/llvm/torch-mlir/pull/824 https://github.com/llvm/torch-mlir/pull/825 It looks like https://github.com/llvm/torch-mlir/pull/796 will also be needed for it. I will check back...
It seems like the issue here is possibly related to multiple returns. Can you use a wrapper module that extracts the logits? See example here: https://github.com/google/iree-torch/blob/c3d7717ef4b9c83aa4870e949d9dee588e6e190d/examples/bert.py#L48 Do you need the...
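For concreteness, here is a rough sketch of such a wrapper (the class name and the assumption that the logits are the first element of the model's output are mine, not taken from the linked bert.py):

```python
import torch

# Hypothetical wrapper: forwards to the wrapped model but keeps only the
# logits, so the exported module has a single return value.
class OnlyLogitsWrapper(torch.nn.Module):
    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def forward(self, input_ids):
        # Discard everything except the logits tensor (assumed to be output[0]).
        return self.model(input_ids)[0]
```

You would then trace/compile `OnlyLogitsWrapper(model)` instead of `model` itself.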
@sjarus -- how have you folks been dealing with the i64s in the bert models?
@qedawkins can you confirm whether this is covered by our current e2e tests / lowering?
Yes, let's add a test for it and close this. Thanks!
Is this a dylib issue? edit: sorry, I parsed the error message wrong -- still, it seems like an issue in PyTorch itself, rather than in Torch-MLIR.
@powderluv did we fix this with the static build?
This should be pretty easy to fix by making the logic here a bit smarter so that it emits the string in multiple chunks (possibly split on each "func"): https://github.com/llvm/torch-mlir/blob/9d6ee48661cefa677950bcd262fe2824a4fa9d17/python/torch_mlir/dialects/torch/importer/jit_ir/build_tools/shape_lib_gen.py#L1244
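As a rough illustration of the chunking idea (a hedged sketch only: `emit_chunked_literal`, the raw-string emission, and the assumption that the string ends up as a C++ literal are mine, not the existing shape_lib_gen.py helpers):

```python
# Split the serialized MLIR on each "func" so the generated file contains
# several adjacent string literals instead of one literal long enough to hit
# compiler limits.
def emit_chunked_literal(asm: str) -> str:
    parts = asm.split("func")
    chunks = [parts[0]] + ["func" + p for p in parts[1:]]
    # Adjacent C++ string literals are concatenated back into the full module text.
    return "\n".join(f'R"mlir({chunk})mlir"' for chunk in chunks if chunk)
```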
> Perhaps, we should follow this advice to fix it once and for all?

That would be a very welcome change!