Mark Mingo
### Feature request Hello, I am exporting [OpenAI Whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) to ONNX and see that it exports several files, most importantly in this case the encoder (encoder_model.onnx & encoder_model.onnx.data) and decoder (decoder_model.onnx,...
I am looking for a way to benchmark the performance of the .pte files, the final state of the ExecutorchProgramManager object, or similar, after following [this](https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html) tutorial. I used the PyTorch...
Hi, I am having trouble finding solid documentation that explains how to do the following with executorch (stable):

- Load the exported .pte model
- Run inference with images...