
Augmented assignment op (+=) fails export

Open msluszniak opened this issue 6 months ago • 1 comment

🐛 Describe the bug

The += op seems to fail for tensors during export. The snippet below is a minimal case reproducing the problem:

import torch
from torch import nn

from torch.export import export
from executorch.exir import EdgeCompileConfig, to_edge

class TestModel(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, a: torch.Tensor, b: torch.Tensor, c: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        invalid_a = torch.eq(a, -1)
        valid_a = torch.eq(a, 1)

        # First (works)
        # b = b + c * invalid_a
        # b = b + d * valid_a

        # Second (works)
        # b = torch.where(invalid_a, b, b + c)
        # b = torch.where(valid_a, b, b + d)

        # Third (fails)
        b[invalid_a] += c
        b[valid_a] += d

        return b
        
        

if __name__ == "__main__":
    model = TestModel()
    example_arguments = (torch.Tensor([1, -1, 1, -1]), torch.Tensor([0, 0, 0, 0]), torch.Tensor([1, 2, 3, 4]), torch.Tensor([4, 3, 2, 1]),)
    prog = export(model, example_arguments)
    edge = to_edge(prog, compile_config=EdgeCompileConfig(_check_ir_validity=False, _skip_dim_order=True),)
    exec_prog = edge.to_executorch()
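
The failing line indexes b with a boolean mask. In eager mode that returns a tensor whose length depends on how many entries of the mask are True, so during export its size becomes an unbacked symbol (the u0 in the logs below), and the in-place add then has to decide whether that unknown length broadcasts against c's length of 4. A minimal eager-mode sketch of the value-dependent shape:

import torch

a = torch.Tensor([1, -1, 1, -1])
b = torch.Tensor([0, 0, 0, 0])
invalid_a = torch.eq(a, -1)

# The length of the result depends on the values in invalid_a, not just its shape,
# so export cannot know it statically.
print(b[invalid_a].shape)  # torch.Size([2]) for these inputs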

The export causes the following error:

W0809 14:35:01.298000 8394476544 torch/fx/experimental/symbolic_shapes.py:5140] [0/0] failed during evaluate_expr(Ne(u0, 4), hint=None, expect_rational=True, size_oblivious=False, forcing_spec=False
E0809 14:35:01.298000 8394476544 torch/fx/experimental/recording.py:298] [0/0] failed while running evaluate_expr(*(Ne(u0, 4), None), **{'fx_node': None})
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] failed while attempting to run meta for aten.add_.Tensor
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] Traceback (most recent call last):
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1878, in _dispatch_impl
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     r = func(*args, **kwargs)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]         ^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/_ops.py", line 727, in __call__
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     return self_._op(*args, **kwargs)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]            ^^^^^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 3582, in meta_binop_inplace_alpha
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     check_inplace_broadcast(self.shape, other.shape)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 86, in check_inplace_broadcast
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     broadcasted_shape = tuple(_broadcast_shapes(self_shape, *args_shape))
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     if common_shape[idx] != shape[idx]:
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/__init__.py", line 672, in __bool__
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     return self.node.bool_()
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]            ^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 496, in bool_
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     return self.guard_bool("", 0)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]            ^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 434, in guard_bool
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     return retlog(fn(*args, **kwargs))
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]                   ^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5138, in evaluate_expr
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     return self._evaluate_expr(orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec=forcing_spec)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5256, in _evaluate_expr
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     raise self._make_data_dependent_error(
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Ne(u0, 4) (unhinted: Ne(u0, 4)).  (Size-like symbols: u0)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] 
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] Potential framework code culprit (scroll up for full backtrace):
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     if common_shape[idx] != shape[idx]:
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] 
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] For more information, run with TORCH_LOGS="dynamic"
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] 
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] User Stack (most recent call last):
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   (snipped, see stack below for prefix)
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]   File "~/test_executorch.py", line 23, in forward
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0]     b[invalid_a] += c
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] 
E0809 14:35:01.299000 8394476544 torch/_subclasses/fake_tensor.py:1882] [0/0] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
Traceback (most recent call last):
  File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1943, in run_node
    return node.target(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/utils/_stats.py", line 21, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1143, in __torch_dispatch__
    return self.dispatch(func, types, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1559, in dispatch
    return self._cached_dispatch_impl(func, types, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1240, in _cached_dispatch_impl
    output = self._dispatch_impl(func, types, args, kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1878, in _dispatch_impl
    r = func(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_ops.py", line 727, in __call__
    return self_._op(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 3582, in meta_binop_inplace_alpha
    check_inplace_broadcast(self.shape, other.shape)
  File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 86, in check_inplace_broadcast
    broadcasted_shape = tuple(_broadcast_shapes(self_shape, *args_shape))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
    if common_shape[idx] != shape[idx]:
  File "~/lib/python3.11/site-packages/torch/__init__.py", line 672, in __bool__
    return self.node.bool_()
           ^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 496, in bool_
    return self.guard_bool("", 0)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 434, in guard_bool
    r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
    return retlog(fn(*args, **kwargs))
                  ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5138, in evaluate_expr
    return self._evaluate_expr(orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec=forcing_spec)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5256, in _evaluate_expr
    raise self._make_data_dependent_error(
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Ne(u0, 4) (unhinted: Ne(u0, 4)).  (Size-like symbols: u0)

Potential framework code culprit (scroll up for full backtrace):
  File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
    if common_shape[idx] != shape[idx]:

For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing

User Stack (most recent call last):
  (snipped, see stack below for prefix)
  File "~/test_executorch.py", line 23, in forward
    b[invalid_a] += c

For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1825, in get_fake_value
    ret_val = wrap_fake_exception(
              ^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1317, in wrap_fake_exception
    return fn()
           ^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1826, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1961, in run_node
    raise RuntimeError(make_error_message(e)).with_traceback(
  File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1943, in run_node
    return node.target(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/utils/_stats.py", line 21, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1143, in __torch_dispatch__
    return self.dispatch(func, types, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1559, in dispatch
    return self._cached_dispatch_impl(func, types, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1240, in _cached_dispatch_impl
    output = self._dispatch_impl(func, types, args, kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1878, in _dispatch_impl
    r = func(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_ops.py", line 727, in __call__
    return self_._op(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 3582, in meta_binop_inplace_alpha
    check_inplace_broadcast(self.shape, other.shape)
  File "~/lib/python3.11/site-packages/torch/_meta_registrations.py", line 86, in check_inplace_broadcast
    broadcasted_shape = tuple(_broadcast_shapes(self_shape, *args_shape))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
    if common_shape[idx] != shape[idx]:
  File "~/lib/python3.11/site-packages/torch/__init__.py", line 672, in __bool__
    return self.node.bool_()
           ^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 496, in bool_
    return self.guard_bool("", 0)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py", line 434, in guard_bool
    r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/recording.py", line 262, in wrapper
    return retlog(fn(*args, **kwargs))
                  ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5138, in evaluate_expr
    return self._evaluate_expr(orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec=forcing_spec)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5256, in _evaluate_expr
    raise self._make_data_dependent_error(
RuntimeError: Failed running call_function <built-in function iadd>(*(FakeTensor(..., size=(u0,)), FakeTensor(..., size=(4,))), **{}):
Could not guard on data-dependent expression Ne(u0, 4) (unhinted: Ne(u0, 4)).  (Size-like symbols: u0)

Potential framework code culprit (scroll up for full backtrace):
  File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
    if common_shape[idx] != shape[idx]:

For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing

User Stack (most recent call last):
  (snipped, see stack below for prefix)
  File "~/test_executorch.py", line 23, in forward
    b[invalid_a] += c

For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "~/test_executorch.py", line 33, in <module>
    prog = export(model, example_arguments)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/export/__init__.py", line 173, in export
    return _export(
           ^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 991, in wrapper
    raise e
  File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 974, in wrapper
    ep = fn(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/export/exported_program.py", line 100, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 1863, in _export
    export_artifact = export_func(
                      ^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 1107, in _strict_export
    return _strict_export_lower_to_aten_ir(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 1137, in _strict_export_lower_to_aten_ir
    gm_torch_level = _export_to_torch_ir(
                     ^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/export/_trace.py", line 544, in _export_to_torch_ir
    gm_torch_level, _ = torch._dynamo.export(
                        ^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1386, in inner
    result_traced = opt_f(*args, **kwargs)
                    ^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 435, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1121, in __call__
    return self._torchdynamo_orig_callable(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 472, in __call__
    return _compile(
           ^^^^^^^^^
  File "~/lib/python3.11/contextlib.py", line 81, in inner
    return func(*args, **kwds)
           ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 817, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 240, in time_wrapper
    r = func(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_utils_internal.py", line 85, in wrapper_function
    return StrobelightCompileTimeProfiler.profile_compile_time(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 636, in compile_inner
    out_code = transform_code_object(code, transform)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1280, in transform_code_object
    transformations(instructions, code_options)
  File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 178, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 581, in transform
    tracer.run()
  File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2498, in run
    super().run()
  File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 908, in run
    while self.step():
          ^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 820, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2064, in BINARY_OP
    return _binary_op_lookup[inst.arg](self, inst)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 234, in impl
    self.push(fn_var.call_function(self, self.popn(nargs), {}))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/variables/builtin.py", line 963, in call_function
    return handler(tx, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/variables/builtin.py", line 942, in _handle_insert_op_in_graph
    return wrap_fx_proxy(tx, proxy)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 1849, in wrap_fx_proxy
    return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 1936, in wrap_fx_proxy_cls
    example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1880, in get_fake_value
    raise UserError(  # noqa: B904
torch._dynamo.exc.UserError: Tried to use data-dependent value in the subsequent computation. This can happen when we encounter unbounded dynamic value that is unknown during tracing time.  You will need to explicitly give hint to the compiler. Please take a look at torch._check OR torch._check_is_size APIs.  Could not guard on data-dependent expression Ne(u0, 4) (unhinted: Ne(u0, 4)).  (Size-like symbols: u0)

Potential framework code culprit (scroll up for full backtrace):
  File "~/lib/python3.11/site-packages/torch/_refs/__init__.py", line 405, in _broadcast_shapes
    if common_shape[idx] != shape[idx]:

For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing

User Stack (most recent call last):
  (snipped, see stack below for prefix)
  File "~/test_executorch.py", line 23, in forward
    b[invalid_a] += c

For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example

from user code:
   File "~/test_executorch.py", line 23, in forward
    b[invalid_a] += c

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
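
For reference, a standalone version of the first, out-of-place variant (the one marked "First (works)" in the snippet above) goes through the same export and lowering pipeline without hitting the guard. This is only a minimal sketch with the same toy inputs and config; it is not claimed to be numerically identical to the in-place masked assignment:

import torch
from torch import nn
from torch.export import export
from executorch.exir import EdgeCompileConfig, to_edge


class MaskedAddModel(nn.Module):
    # Hypothetical name; same body as the "First (works)" variant above.
    def forward(self, a: torch.Tensor, b: torch.Tensor, c: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        invalid_a = torch.eq(a, -1)
        valid_a = torch.eq(a, 1)
        # Out-of-place masked adds: every intermediate keeps the static shape (4,),
        # so no data-dependent size is introduced during tracing.
        b = b + c * invalid_a
        b = b + d * valid_a
        return b


if __name__ == "__main__":
    example_arguments = (
        torch.Tensor([1, -1, 1, -1]),
        torch.Tensor([0, 0, 0, 0]),
        torch.Tensor([1, 2, 3, 4]),
        torch.Tensor([4, 3, 2, 1]),
    )
    prog = export(MaskedAddModel(), example_arguments)
    edge = to_edge(prog, compile_config=EdgeCompileConfig(_check_ir_validity=False, _skip_dim_order=True))
    exec_prog = edge.to_executorch()  # expected to complete, per the "works" note above

The torch.where variant noted as the second workaround avoids the issue for the same reason: it never materializes a tensor whose size depends on the mask values.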

Versions

Collecting environment information...
PyTorch version: 2.5.0.dev20240716
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.1
Libc version: N/A

Python version: 3.11.0 (main, Mar 1 2023, 12:33:14) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU: Apple M3 Pro

Versions of relevant libraries:
[pip3] executorch==0.4.0a0+1114539
[pip3] executorchcoreml==0.0.1
[pip3] numpy==1.23.2
[pip3] torch==2.5.0.dev20240716
[pip3] torchaudio==2.4.0.dev20240716
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0.dev20240716
[conda] executorch 0.4.0a0+11b2fcb pypi_0 pypi
[conda] executorchcoreml 0.0.1 pypi_0 pypi
[conda] numpy 1.23.2 pypi_0 pypi
[conda] torch 2.5.0.dev20240716 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240716 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240716 pypi_0 pypi

msluszniak · Aug 09 '24 12:08