
Support eq.Scalar

Open · cccclai opened this issue 3 weeks ago · 5 comments

This was reported on Discord: the following pattern fails to lower to QNN.

import torch


class EqualFromInplaceCopyDecomp(torch.nn.Module):
    def __init__(self, hidden_size=4):
        super().__init__()
        # A small mutable state tensor registered as a buffer.
        self.register_buffer("h", torch.zeros((1, hidden_size)))

    def forward(self, x):
        # In-place write to the buffer, then read it back.
        self.h[0] = x
        return self.h[0]
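
For reference, a minimal export-and-lower sketch is below. The lowering call is the generic ExecuTorch entry point; the QNN partitioner wiring is only indicated in a comment because its compile-spec setup is backend- and version-specific, and is an assumption rather than the exact flow used in this PR.

import torch
from executorch.exir import to_edge_transform_and_lower

module = EqualFromInplaceCopyDecomp().eval()
example_inputs = (torch.randn(4),)

# torch.export functionalizes the in-place write: `self.h[0] = x` is recorded
# as a mutation of the registered buffer `h` in the exported program.
exported = torch.export.export(module, example_inputs)

# Lowering is where the failure is reported. For QNN, a Qualcomm partitioner
# would be passed in, e.g. partitioner=[QnnPartitioner(compiler_specs)]
# (left as a comment here since the compiler-spec construction is
# backend-specific).
edge = to_edge_transform_and_lower(exported)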

Differential Revision: D86891707

cccclai avatar Nov 12 '25 23:11 cccclai

:link: Helpful Links

:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15792

Note: Links to docs will display an error until the docs builds have been completed.

:x: 1 New Failure

As of commit 8d9c9af076faea8bdf1ada319053febb3e462d11 with merge base 8ab589a692e3fdd3995b8ee59cb399355f479b2e:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot[bot] avatar Nov 12 '25 23:11 pytorch-bot[bot]

@cccclai has exported this pull request. If you are a Meta employee, you can view the originating Diff in D86891707.

meta-codesync[bot] avatar Nov 12 '25 23:11 meta-codesync[bot]

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example @pytorchbot label "release notes: none"

For more information, see https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

github-actions[bot] avatar Nov 12 '25 23:11 github-actions[bot]

@haowhsu-quic @winskuo-quic @shewu-quic @DannyYuyang-quic Hi team, I might need some help with this pattern. I managed to lower the pattern to fp, but it's failing with the quantized version. I think a copy_ node shows up in the graph, indicating a mutable buffer. Do you know the best way to move forward with this pattern?
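
For context, a quick way to see how the mutated buffer is represented after export (a sketch assuming plain torch.export on the module above, not the exact quantized repro):

import torch

m = EqualFromInplaceCopyDecomp().eval()
ep = torch.export.export(m, (torch.randn(4),))

# The in-place write to self.h is functionalized; the graph signature records
# `h` as a mutated buffer, and after decompositions the mutation typically
# shows up as a copy_ (or a buffer write-back output) in the graph.
print(ep.graph_module.graph)
print(ep.graph_signature.buffers_to_mutate)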

cccclai avatar Nov 14 '25 01:11 cccclai

> @haowhsu-quic @winskuo-quic @shewu-quic @DannyYuyang-quic Hi team, I might need some help with this pattern. I managed to lower the pattern to fp, but it's failing with the quantized version. I think a copy_ node shows up in the graph, indicating a mutable buffer. Do you know the best way to move forward with this pattern?

Hi @cccclai,

I am able to reproduce the issue; I get the following graph finalize error message: `No graph inputs present for graph [0]`

I believe this has to do with the behavior of run_decomposition here: https://github.com/pytorch/executorch/blob/b433278ec47a2ff9274fc456b62a0511a941dee7/exir/program/_program.py#L1176

The input node is actually not connected to the graph after run_decomposition in the quantization flow, as shown in the screenshot below. The copy_ node also disappeared.

[screenshot: decomposed graph for the quantized flow, showing the disconnected input node]

On the other hand, the floating-point flow works properly because the input node is still connected to the graph after run_decomposition.

[screenshot: decomposed graph for the floating-point flow, with the input node still connected]
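
A rough sketch of how one might check this difference (illustrative only; the real quantized flow goes through the QNN quantizer with prepare_pt2e/convert_pt2e before export and lowering):

import torch

m = EqualFromInplaceCopyDecomp().eval()
example_inputs = (torch.randn(4),)

# Floating-point flow: export, run decompositions, and check whether the
# user-input placeholder still has users in the graph.
ep_fp = torch.export.export(m, example_inputs).run_decompositions()
for node in ep_fp.graph.nodes:
    if node.op == "placeholder":
        print("placeholder:", node.name, "users:", len(node.users))

# In the failing quantized flow, the same check shows the user-input
# placeholder with zero users after run_decompositions, which matches the
# "No graph inputs present for graph [0]" error from QNN graph finalize.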

winskuo-quic avatar Nov 14 '25 05:11 winskuo-quic