executorch
On-device AI across mobile, embedded and edge for PyTorch
Summary: This diff parses the intermediate outputs logged in etdump into Inspector objects. Design doc: https://docs.google.com/document/d/1qGHsgd-roqtxPz4CrUlqGrKaAtbaf9bArziMuwHD0So/edit Differential Revision: D59614926
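As a rough illustration of how those parsed Inspector objects might be consumed, here is a minimal Python sketch using the ExecuTorch devtools Inspector. The file paths are placeholders, and the import path has moved between releases (older releases exposed it as `executorch.sdk.Inspector`), so treat the specifics as assumptions.

```python
# Minimal sketch: load an etdump into an Inspector and browse the parsed
# events. Paths are placeholders; import path varies by release.
from executorch.devtools import Inspector

inspector = Inspector(
    etdump_path="model.etdump",   # runtime log, incl. intermediate outputs
    etrecord="model.etrecord",    # ahead-of-time record for source correlation
)

# Events are grouped into blocks (e.g. one block per inference run).
for event_block in inspector.event_blocks:
    for event in event_block.events:
        # Intermediate outputs and timing live on the parsed Event objects.
        print(event.name, event.debug_data)

# Or dump everything as a table.
inspector.print_data_tabular()
```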
Summary: Change the Conv1d tests to use to_edge_transform_and_lower. This leverages and tests our new partitioner. One of the tests, `test_qs8_conv1d_with_floating_point_partitioner`, was added by tarun, who wanted to validate quantized et...
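For context, this is a minimal sketch of the `to_edge_transform_and_lower` flow the tests moved to; the Conv1d module and shapes are illustrative stand-ins, not the actual test model.

```python
# Sketch of lowering a Conv1d model via to_edge_transform_and_lower with
# the XNNPACK partitioner. Module and shapes are illustrative only.
import torch
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner

class Conv1dModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv1d(in_channels=2, out_channels=4, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

ep = torch.export.export(Conv1dModule().eval(), (torch.randn(1, 2, 8),))

# The partitioner decides, op by op, which nodes get delegated to XNNPACK.
edge = to_edge_transform_and_lower(ep, partitioner=[XnnpackPartitioner()])
et_program = edge.to_executorch()
```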
Summary: X-link: https://github.com/pytorch/pytorch/pull/132525 When HOPs live out of tree, it becomes impossible to make breaking changes to the HOP API. But HOP implementations are deeply entwined with PyTorch internals....
Summary: Partitioner Configs for all the Max Ops. Max.dim is not a single node because its pattern can also contain getitem nodes. Reviewed By: digantdesai Differential Revision: D60323288
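A small sketch of why max.dim spans multiple nodes: `torch.max` with a `dim` argument returns a (values, indices) tuple, so the exported graph carries getitem nodes alongside the `aten.max.dim` call, and a partitioner config has to claim them together. The module below is illustrative.

```python
# Illustrative: max.dim returns a (values, indices) tuple, so the exported
# graph contains the aten.max.dim node plus a getitem node per output.
import torch

class MaxDim(torch.nn.Module):
    def forward(self, x):
        values, indices = torch.max(x, dim=1)
        return values, indices

ep = torch.export.export(MaxDim(), (torch.randn(2, 3),))

# Printing the nodes shows aten.max.dim followed by operator.getitem
# nodes -- the reason the config must span more than one node.
for node in ep.graph.nodes:
    print(node.op, node.target)
```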
Summary: Div, Mul, and Elu partitioner configs. This enables Elu because we can prevent its decomposition. Reviewed By: digantdesai Differential Revision: D60323282
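A hedged sketch of the decomposition-prevention mechanism: under `to_edge_transform_and_lower`, a partitioner can report ops it wants kept whole via the `ops_to_not_decompose` hook on ExecuTorch's `Partitioner` interface. Everything beyond that hook here (the class name, the empty `partition` body) is a placeholder, not the actual Elu config.

```python
# Hedged sketch: a partitioner asks that aten.elu not be decomposed, so
# the backend can delegate it as a single node. Placeholder class; only
# the ops_to_not_decompose hook is the point of this example.
from typing import Callable, List, Optional, Tuple

import torch
from executorch.exir.backend.partitioner import Partitioner, PartitionResult
from torch.export import ExportedProgram

class EluKeepingPartitioner(Partitioner):
    def ops_to_not_decompose(
        self, ep: ExportedProgram
    ) -> Tuple[List[torch._ops.OpOverload], Optional[Callable[[torch.fx.Node], bool]]]:
        # Ask to_edge_transform_and_lower to keep aten.elu as one node
        # instead of letting it decompose into smaller ops.
        return ([torch.ops.aten.elu.default], None)

    def partition(self, exported_program: ExportedProgram) -> PartitionResult:
        ...  # node tagging omitted in this sketch
```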
Summary: Permute, Softmax, and Sigmoid Configurations. Reviewed By: digantdesai Differential Revision: D60323286
Summary: Add Sub, Sqrt, and Pad Partitioner Configs. Reviewed By: digantdesai Differential Revision: D60492340
Summary: Configs for Mean, Min, and Neg. Reviewed By: digantdesai Differential Revision: D60492339
Summary: Config for UpsampleBilinear2d. This lets us avoid logic for graph matching and replacement. Reviewed By: digantdesai Differential Revision: D60323281
Summary: Configurations for Prelu, Pow, and Slice. We can now delegate Prelu because we can prevent its decomposition. Reviewed By: digantdesai Differential Revision: D60492336