
Does concatenate order matter?


I have the code below, which concatenates zero tensors onto the input tensor before a conv layer. The corresponding original PyTorch code is: x = torch.nn.functional.pad(x, pad, mode="constant", value=0).
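For reference, a minimal sketch of that PyTorch call is shown below. The actual pad tuple is not given in the report, so the values here (one extra column on the right and one extra row at the bottom of an NCHW input) are only an assumption for illustration:

import torch

x = torch.randn(1, 3, 4, 5)  # (N, Cin, H, W)
pad = (0, 1, 0, 1)  # (W_left, W_right, H_top, H_bottom): last dims are padded first
x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
print(x.shape)  # torch.Size([1, 3, 5, 6]): H and W each grow by one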

In AIT, assume the input tensor shape is (N, H, W, Cin). Conversion code 1 pads H first, followed by W:

T1 = ops.full()(shape=[N, 1, W, Cin], fill_value=0)
R = ops.concatenate()(inputs=[x, T1], dim=1)
T2 = ops.full()(shape=[N, H + 1, 1, Cin], fill_value=0)
x = ops.concatenate()(inputs=[R, T2], dim=2)

This gives different results than conversion code 2, which pads W first, followed by H:

T1 = ops.full()(shape=[N, H, 1, Cin], fill_value=0)
R = ops.concatenate()(inputs=[x, T1], dim=2)
T2 = ops.full()(shape=[N, 1, W+1, Cin], fill_value=0)
x = ops.concatenate()(inputs=[R, T2], dim=1)
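As a sanity check on the math, here is a minimal PyTorch sketch (not AIT code) of the same two concat orders on an NHWC tensor. Both orders produce the identical padded tensor, so the two AIT conversions would be expected to agree as well:

import torch

N, H, W, Cin = 2, 3, 4, 5
x = torch.randn(N, H, W, Cin)

# H first, then W (mirrors conversion code 1)
r1 = torch.cat([x, torch.zeros(N, 1, W, Cin)], dim=1)
y1 = torch.cat([r1, torch.zeros(N, H + 1, 1, Cin)], dim=2)

# W first, then H (mirrors conversion code 2)
r2 = torch.cat([x, torch.zeros(N, H, 1, Cin)], dim=2)
y2 = torch.cat([r2, torch.zeros(N, 1, W + 1, Cin)], dim=1)

print(torch.equal(y1, y2))  # True: same (N, H+1, W+1, Cin) result either way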

This code runs many times during inference. I find that when I use the first block of code (H-first concat), the converted model only produces correct results for H > W; the second conversion (W-first concat) only works when W > H. Square inputs work with either approach. Example (H, W) dimensions to try: (896, 1152), (1152, 896), (1024, 1024).

— ecilay, Feb 29 '24

Thanks for your report, we will try to reproduce the issue and then get back to you.

— kadeng, Mar 01 '24