
🐛 [Bug] Implement dynamic batch and dynamic shapes support for layer norm converter

Open · peri044 opened this issue 1 year ago • 1 comment

Bug Description

Implement dynamic batch and dynamic shape support for the layer norm converter. Once implemented, add the following test case:

def test_layernorm_with_dynamic_shape(self):
    # Runs inside the dynamo conversion test harness (DispatchTestCase),
    # which provides run_test_with_dynamic_shape. Assumes:
    #   import torch
    #   from torch_tensorrt import Input
    class LayerNorm(torch.nn.Module):
        def forward(self, x):
            return torch.ops.aten.layer_norm.default(
                x,
                [3, 224, 224],               # normalized_shape is an int list
                torch.ones((3, 224, 224)),   # weight
                torch.zeros((3, 224, 224)),  # bias
                1e-05,                       # eps
                True,                        # cudnn_enable
            )

    input_specs = [
        Input(
            shape=(-1, 3, 224, 224),  # -1 marks the dynamic batch dimension
            dtype=torch.float32,
            # One (min, opt, max) shape range for the dynamic dimension
            shape_ranges=[((1, 3, 224, 224), (1, 3, 224, 224), (2, 3, 224, 224))],
        ),
    ]

    self.run_test_with_dynamic_shape(
        LayerNorm(),
        input_specs,
    )
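
For context, one way a converter could satisfy this (a minimal sketch, not the shipped Torch-TensorRT implementation): TensorRT's INormalizationLayer (available since TensorRT 8.6) takes its reduction axes as a bitmask rather than as concrete sizes, so the resulting engine does not depend on the runtime batch size. The helper name below and the assumption that weight/bias arrive as float32 numpy arrays are illustrative.

import numpy as np
import tensorrt as trt

def convert_layer_norm_dynamic(network, input_tensor, weight, bias, eps):
    # Reduce over the trailing dims covered by normalized_shape; a bitmask
    # of axes needs no concrete sizes, so a dynamic (-1) batch dim is fine.
    rank = len(input_tensor.shape)
    first_axis = rank - weight.ndim
    axes_mask = 0
    for axis in range(first_axis, rank):
        axes_mask |= 1 << axis

    # Pad scale/bias with leading 1s up to the input rank so TensorRT can
    # broadcast them across the (possibly dynamic) batch dimension.
    full_shape = (1,) * first_axis + weight.shape
    scale = network.add_constant(full_shape, np.ascontiguousarray(weight)).get_output(0)
    shift = network.add_constant(full_shape, np.ascontiguousarray(bias)).get_output(0)

    layer = network.add_normalization(input_tensor, scale, shift, axes_mask)
    layer.epsilon = eps
    return layer.get_output(0)

Because nothing above references a concrete batch size, the same engine should serve every batch size in the optimization profile's (min, opt, max) range from the test.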

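To check the behavior end to end once the converter lands, a user-level repro could look like the following. This is a hedged sketch: the module and shape ranges mirror the test case above, using the standard dynamo compile path.

import torch
import torch_tensorrt

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = torch.nn.LayerNorm((3, 224, 224))

    def forward(self, x):
        return self.norm(x)

model = Model().eval().cuda()

# Same (min, opt, max) batch range as the test case above.
inputs = [
    torch_tensorrt.Input(
        min_shape=(1, 3, 224, 224),
        opt_shape=(1, 3, 224, 224),
        max_shape=(2, 3, 224, 224),
        dtype=torch.float32,
    )
]

trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
# Both batch sizes inside the profile range should run on the same engine.
print(trt_model(torch.randn(1, 3, 224, 224).cuda()).shape)
print(trt_model(torch.randn(2, 3, 224, 224).cuda()).shape)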

peri044 · Apr 12 '24 19:04

Any update? Can layer norm support dynamic shapes now?

Feynman1999 · Jun 03 '24 10:06