hexisyztem
Is it possible to set parameters so that the exported model supports longer input lengths and batched inference?
I hope to use TensorRT for acceleration during service deployment, but I haven't found any related work online, so I would like to ask here if the ESPnet team has...
After converting the FastSpeech2 model with espnet_onnx, the audio generated by the model is distorted. Model used: kan-bayashi/jsut_fastspeech2. Download method:

```python
from espnet_model_zoo.downloader import ModelDownloader
d = ModelDownloader("~/.cache/espnet")
...
```
Have you considered using TorchDynamo instead of tracing to export the ONNX model? You can refer to the documents below. https://pytorch.org/tutorials/beginner/onnx/export_simple_model_to_onnx_tutorial.html https://pytorch.org/docs/stable/torch.compiler_deepdive.html