Add decomposition for aten.flatten.using_ints
Could you review this for me, @silvasean @powderluv? Thanks!
@qedawkins can you help with the TorchToLinalg failures? Can we improve the lowering of ConvertAtenViewOp to support this?
@fortianyou -- We probably want to add support for a list of ops to decompose. I am happy to do that, but probably won't be able to until after next week.
@silvasean These failures are specific to this PR, correct? I hope this isn't a regression on my part. Also, I was looking at adding support for other AtenView cases needed for another model, so I can add those here as well.
I think it is specific to the decomposition in this PR.
I see now. The decomposition overrides the previous conversion and relies on cases of AtenView that the lowering doesn't yet handle. The existing view lowering tries to avoid any control flow, but it looks like that won't be possible for these cases, in particular:
note: see current operation: %2386 = "torch.aten.view"(%2372, %2385) : (!torch.vtensor<[?,512,1,1],f32>, !torch.list<int>) -> !torch.vtensor<[?,512],f32>
Out of curiosity can I ask why this is being moved to a decomposition?
Why is control flow needed?
I'm counting asserts that check the shape arguments are valid (e.g. (2, 3) can't be viewed as (3, 3)) as control flow. Also, to avoid actual control flow in the dynamic cases, I think we have to greedily flatten and then expand (e.g. [?, ?, ?] -> [?, ?] becomes [?, ?, ?] -> [?] -> [?, ?]). LMK if you have another suggestion.
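To make the "greedily flatten then expand" idea concrete, here is a shape-level sketch in Python. This is purely illustrative (the function name and structure are hypothetical, not torch-mlir code): instead of matching source dims to target dims, which would need per-dimension control flow when sizes are dynamic, the view is lowered as a collapse to 1-D followed by an expand.

```python
from math import prod

def lower_view(src_shape, dst_shape):
    """Hypothetical sketch: return the sequence of shapes a view goes
    through under the flatten-then-expand strategy."""
    # The one check we can't avoid: element counts must match,
    # e.g. (2, 3) can't be viewed as (3, 3).
    assert prod(src_shape) == prod(dst_shape), "element counts differ"
    flat = (prod(src_shape),)  # collapse everything: [?, ?, ?] -> [?]
    return [src_shape, flat, dst_shape]  # then expand: [?] -> [?, ?]

# (2, 3, 4) viewed as (6, 4) goes through the 1-D shape (24,):
print(lower_view((2, 3, 4), (6, 4)))  # [(2, 3, 4), (24,), (6, 4)]
```

The upside is that the same two-step sequence works for any pair of shapes with equal element counts; the downside is that the intermediate collapse can lose dimension-size information a smarter pairwise lowering would preserve.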
Thanks very much, @silvasean.
@qedawkins We want to support lowering aten.flatten.using_ints in other passes such as TorchToMhlo (fully dynamic shapes). It was moved to the decomposition pass because it's a view-like op; with the decomposition, multiple backends benefit from it.
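For readers unfamiliar with the op, here is a shape-level sketch of why flatten.using_ints decomposes naturally into a view: the dims in [start, end] collapse into a single dimension. The function name is illustrative, not the actual torch-mlir decomposition code.

```python
from math import prod

def flatten_using_ints_shape(shape, start, end):
    """Hypothetical sketch: result shape of
    aten.flatten.using_ints(t, start, end), which the decomposition
    implements as aten.view(t, new_shape)."""
    collapsed = prod(shape[start:end + 1])  # merge dims start..end
    return shape[:start] + (collapsed,) + shape[end + 1:]

# Flattening dims 1..3 of an [N, 512, 1, 1] tensor yields [N, 512],
# matching the torch.aten.view in the failure note earlier in this thread.
print(flatten_using_ints_shape((8, 512, 1, 1), 1, 3))  # (8, 512)
```

Because the result is just a view with a computed shape, any backend that can lower aten.view picks up flatten for free.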
I see, that makes sense. I opened a PR that should solve the view problems you're having: https://github.com/llvm/torch-mlir/pull/1168
Thanks, @silvasean @qedawkins. After rebasing and some modifications to the backend configurations, all the unit tests passed. cc @ZihengJiang @Vremold, we can go ahead and upstream the resnet18 example :D