
Converting FastPitch to TensorRT engine.

PatrykNeubauer opened this issue 2 years ago · 3 comments

I wanted to convert my TTS system to TensorRT. The HifiGAN models weren't problematic at all, but I've been unable to build a TensorRT engine from the FastPitch models.

I've exported both the HifiGAN and FastPitch models to ONNX via the Exportable.export() method, but the FastPitch ones fail to even be parsed by TensorRT's ONNX parser.
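For context, the export step via Exportable.export() can be sketched as a small helper like the one below. The helper name and the file-name prefix are my own; the models passed in would be NeMo's FastPitchModel and HifiGanModel instances (e.g. loaded with from_pretrained()):

```python
def export_tts_to_onnx(spec_gen, vocoder, prefix="tts"):
    """Export a NeMo spectrogram generator + vocoder pair to ONNX files.

    spec_gen/vocoder are NeMo Exportable models, e.g. FastPitchModel
    and HifiGanModel. Returns the list of written file paths.
    """
    paths = []
    for name, model in [("fastpitch", spec_gen), ("hifigan", vocoder)]:
        model.eval()                # Exportable.export() expects eval mode
        path = f"{prefix}_{name}.onnx"
        model.export(path)          # export format inferred from the .onnx suffix
        paths.append(path)
    return paths
```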

That's the error I get:

[03/22/2022-09:23:02] [TRT] [E] [shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_1126: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
[03/22/2022-09:23:02] [TRT] [E] parsers/onnx/ModelImporter.cpp:780: While parsing node number 1137 [Pad -> "1682"]:
[03/22/2022-09:23:02] [TRT] [E] parsers/onnx/ModelImporter.cpp:781: --- Begin node ---
[03/22/2022-09:23:02] [TRT] [E] parsers/onnx/ModelImporter.cpp:782: input: "1655"
input: "1680"
input: "1681"
output: "1682"
name: "Pad_1137"
op_type: "Pad"
attribute {
  name: "mode"
  s: "constant"
  type: STRING
}

[03/22/2022-09:23:02] [TRT] [E] parsers/onnx/ModelImporter.cpp:783: --- End node ---
[03/22/2022-09:23:02] [TRT] [E] parsers/onnx/ModelImporter.cpp:785: ERROR: parsers/onnx/ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Pad_1137
[shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_1126: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
In node 1137 (parseGraph): INVALID_NODE: Invalid Node - Pad_1137
[shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_1126: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])

So TensorRT clearly can't handle that particular padding. I've tried different ONNX opset versions (11, 12, 13) and tried converting the models from INT64 to INT32 myself, since I suspected the problem might lie in the fact that TensorRT doesn't natively support INT64 and was attempting to cast the model down to INT32 itself, but nothing has worked.
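The INT64-to-INT32 downcast mentioned above can be sketched as below. This is a minimal version that only rewrites graph initializers; a complete pass would also patch Cast targets, Constant nodes, and INT64 node attributes. The two integer constants mirror the onnx.TensorProto enum values, so the snippet itself is library-free:

```python
# onnx.TensorProto enum values for the two integer types.
INT64, INT32 = 7, 6

def downcast_initializers(model):
    """Rewrite INT64 initializers of an onnx.ModelProto-like object in place.

    Assumes all values fit into int32. Tensors serialized via raw_data
    would need numpy.frombuffer(init.raw_data, dtype=numpy.int64) instead
    of reading int64_data directly.
    """
    for init in model.graph.initializer:
        if init.data_type == INT64:
            values = list(init.int64_data)
            init.data_type = INT32
            del init.int64_data[:]      # repeated fields support slice deletion
            init.int32_data.extend(values)
    return model
```

Running the ONNX file through onnx-simplifier first is also commonly suggested for this class of shape-tensor errors, since it can fold the shape computations feeding the Pad node into constants.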

The machine I'm working on has CUDA 11.4, which I can't change, so I'm using TensorRT 8.2.2.1.

Has anyone been successful in using TensorRT with FastPitch models, or are they just not compatible? So far the problem seems specific to FastPitch.

If someone has succeeded, did the model need changes, e.g. to the paddings? Or, if it worked out of the box, which combination of NeMo, ONNX opset, and TensorRT versions should be used?

PatrykNeubauer · Mar 22 '22 10:03

Same problem with TensorRT 8.4.0.6 and CUDA 11.4.

yuananf · Mar 24 '22 12:03

Can you share the trtexec command you are using for the FastPitch ONNX to TensorRT conversion?

JayZ92-99 · Aug 30 '22 06:08

> Can you share the trtexec command you are using for the FastPitch ONNX to TensorRT conversion?

It's been so long that I can't recall exactly which options we used, but we tried most of the things mentioned in these docs.

I also remember trying it through the Python API; nothing worked for FastPitch in the end, though.
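For anyone trying to reproduce this, a typical trtexec invocation for a dynamic-shape ONNX model on TensorRT 8.x looks like the following. The input tensor name `text` and the shape ranges are assumptions, not taken from this thread; check the real input names first, e.g. with `polygraphy inspect model fastpitch.onnx`:

```shell
# Example trtexec command for a dynamic-shape ONNX model (TensorRT 8.x).
# "text" and the batch/length ranges below are placeholders.
trtexec --onnx=fastpitch.onnx \
        --minShapes=text:1x1 \
        --optShapes=text:1x128 \
        --maxShapes=text:1x256 \
        --saveEngine=fastpitch.plan \
        --verbose
```

With the model in this thread, the command fails during ONNX parsing (the Pad_1137 error above), before engine building even starts.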

PatrykNeubauer · Aug 30 '22 07:08

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] · Oct 08 '22 02:10

@borisfom Could you please share any advice here? Thanks.

XuesongYang · Oct 08 '22 07:10

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] · Nov 10 '22 02:11

This issue was closed because it has been inactive for 7 days since being marked as stale.

github-actions[bot] · Nov 18 '22 02:11