Yi Zhang
I am not able to reproduce this locally or through grinder. It would be very helpful if we could get some coredumps for the failure from the nightly build.
@Mesbah-Alam could you help add the following option to the failing tests above? `-Xdump:system+java:events=throw,filter=java/lang/NullPointerException#*java/lang/invoke/BruteArgumentMoverHandle.permuteArguments*`
I had a local patch from a long time ago: https://github.com/cathyzhyi/mlir-npcomp/commit/556f6244bd2e0e785179d4b267fcc7e06c6d7d22. The codebase has diverged a lot since then, but it might still be useful as a reference.
> @cathyzhyi I have added a PR for this decomposition in functorch: [pytorch/functorch#525](https://github.com/pytorch/functorch/pull/525) So I guess having the decomposition here also will duplicate it.

@gprateek93 After the decomposition is merged...
The canonicalizer is an optimization pass that is not guaranteed to run. We would need the conversion pattern for correctness, so it can't be deleted. Why do we need the folder?
Actually, don't we only accept `padding` as a constant int list? Why do we need the `getDynamicZeroPaddedTensor` API?
> > Actually, don't we only accept the `padding` as constant int list? Why do we need the `getDynamicZeroPaddedTensor` API?
>
> The calculation for the correct padding, `gradInputPaddingValues`, requires...
@silvasean I submitted a workaround in https://github.com/llvm/torch-mlir/pull/843
> python scalars go in, python scalars come out

So `torch.ops.aten.ceil(1.5)` should be expected to return a scalar rather than a Tensor, right?
I see. Thanks for confirming. It seems `torch.ops.aten.ceil(1.5)` still returns a Tensor even with that fix. Let me open another issue upstream.
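A quick way to check the observed behavior (a minimal sketch; whether the call returns a Python scalar or a 0-dim Tensor depends on the installed PyTorch build and whether the overload-resolution fix is included):

```python
import torch

# Call the aten op with a plain Python float, as discussed above.
result = torch.ops.aten.ceil(1.5)

# Per "python scalars go in, python scalars come out" this would ideally be
# a Python number; the behavior reported above is that it is still a Tensor.
print(result, type(result))
```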