Support eq.Scalar
This pattern was reported on Discord; it fails to lower to QNN:
```python
class EqualFromInplaceCopyDecomp(torch.nn.Module):
    def __init__(self, hidden_size=4):
        super().__init__()
        # a small state tensor
        self.register_buffer("h", torch.zeros((1, hidden_size)))

    def forward(self, x):
        self.h[0] = x
        return self.h[0]
```
Differential Revision: D86891707
:link: Helpful Links
:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15792
- :page_facing_up: Preview Python docs built from this PR
Note: Links to docs will display an error until the docs builds have been completed.
:x: 1 New Failure
As of commit 8d9c9af076faea8bdf1ada319053febb3e462d11 with merge base 8ab589a692e3fdd3995b8ee59cb399355f479b2e:
NEW FAILURE - The following job has failed:
- Lint / lintrunner / linux-job (gh)
>>> Lint for backends/qualcomm/tests/test_qnn_delegate.py:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@cccclai has exported this pull request. If you are a Meta employee, you can view the originating Diff in D86891707.
This PR needs a release notes: label
If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.
To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"
For more information, see https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.
@haowhsu-quic @winskuo-quic @shewu-quic @DannyYuyang-quic Hi team, I could use some help with this pattern. I managed to lower it in fp, but it fails with the quantized version. A copy_ node shows up in the graph, indicating a mutable buffer. Do you know the best way forward with this pattern?
Hi @cccclai,
I am able to reproduce the issue where I get the following graph finalize error message:
No graph inputs present for graph [0]
I believe this has to do with the behavior of run_decomposition here: https://github.com/pytorch/executorch/blob/b433278ec47a2ff9274fc456b62a0511a941dee7/exir/program/_program.py#L1176
In the quantization flow, the input node is actually disconnected from the graph after run_decomposition, as shown in the image below; the copy_ node also disappears. The floating-point flow, on the other hand, works properly because the input node is still connected to the graph after run_decomposition.