executorch
On-device AI across mobile, embedded and edge for PyTorch
Please remove the IR dump shown to users: it has no practical purpose for them and only obscures the real messages they should see.
Hello. I'm trying to convert the module below to my backend binary.
```python
import torch

N_HEADS = 4
DIM = 16
MAX_SEQ_LEN = 8

def precompute_freqs_cis(dim: int, end: int, theta: float...
```
- Added Llama 3 8B
- Added llm_manual to the list
- Changed name from Xtensa to Cadence
Fill out the recommended `project` keys, most of which affect the web page that PyPI will render for the `executorch` package. See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#about-your-project for the latest guidance. Use https://github.com/pytorch/pytorch/blob/a21327e0b03cc18850a0608be2d9c5bd38fd4646/setup.py#L1394...
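As a hedged sketch of what the requested `[project]` table might look like (values below are illustrative placeholders, not the keys the maintainers ultimately chose):

```toml
# Hypothetical pyproject.toml fragment; description taken from the repo tagline,
# everything else is a placeholder to be filled in per the packaging guide.
[project]
name = "executorch"
description = "On-device AI across mobile, embedded and edge for PyTorch"
readme = "README.md"
requires-python = ">=3.10"
license = { file = "LICENSE" }
keywords = ["pytorch", "edge", "mobile", "embedded"]
classifiers = ["Programming Language :: Python :: 3"]

[project.urls]
Homepage = "https://github.com/pytorch/executorch"
```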
Hello. I'm trying to use `executorch` to convert the torch module below to my backend binary. When I called `to_backend`, the following warning was printed:
```
/home/seongwoo/.venv/lib/python3.10/site-packages/torch/export/_unlift.py:58: UserWarning: Attempted to...
```
Summary: Otherwise, we require LTO for decent performance for this thing in kernels. Reviewed By: kimishpatel, manuelcandales Differential Revision: D56503573
1. Downloaded doc.zip from the latest release branch build: https://github.com/pytorch/executorch/actions/runs/8809033712
2. Patched the version to 0.2.
3. Removed the refs/ dir, which we don't need.
4. C++ docs are fixed:
Summary: Manually hoisting loads of these constants speeds up the kernel a lot. Depending on the types of CTYPE and CTYPE_BIAS, the compiler might not be able to prove this...
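A minimal sketch of the hoisting idea, assuming the usual aliasing obstacle (the function name and loop are illustrative, not the actual executorch kernel):

```cpp
#include <cstddef>

// Without the manual hoist, the values behind scale_ptr/bias_ptr may be
// reloaded on every iteration, because the compiler cannot prove they don't
// alias `data`. Copying them into locals before the loop guarantees a single
// load each, and lets the loop body vectorize cleanly.
float scaled_sum(const float* data, std::size_t n,
                 const float* scale_ptr, const float* bias_ptr) {
  const float scale = *scale_ptr;  // hoisted load
  const float bias = *bias_ptr;    // hoisted load
  float acc = 0.0f;
  for (std::size_t i = 0; i < n; ++i) {
    acc += data[i] * scale + bias;
  }
  return acc;
}
```

The same effect can sometimes be obtained with `__restrict` annotations, but an explicit local copy works on every toolchain.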
Follow-up to https://github.com/pytorch/executorch/pull/3262, which didn't actually solve the problem. By wrapping potentially non-compliant `isinf`/`isnan` implementations in a lambda with a defined return type, the compiler should be able to...
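A sketch of the wrapping trick under one assumption: some libm implementations expose `isinf`/`isnan` with an `int` return type (or as macros), which can confuse overload resolution in templated kernels. The wrapper names below are hypothetical, not the actual executorch patch:

```cpp
#include <cmath>

// Pin down the signature the compiler sees by routing the call through a
// lambda with an explicit `-> bool` return type. Even if the underlying
// std::isinf/std::isnan return int on this toolchain, callers now get a
// single well-typed bool-returning callable.
inline bool wrapped_isinf(double x) {
  auto f = [](double v) -> bool { return std::isinf(v); };
  return f(x);
}

inline bool wrapped_isnan(double x) {
  auto f = [](double v) -> bool { return std::isnan(v); };
  return f(x);
}
```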
This PR is auto-generated nightly by [this action](https://github.com/pytorch/executorch/blob/main/.github/workflows/nightly.yml). Update the pinned pytorch hash.