
Fix broken dynamo cudagraph test

Open · msaroufim opened this issue · 2 comments

Tried to run the test locally and discovered that AOTAutograd had been renamed to AotAutograd, which broke 7 of the dynamo CUDA graphs tests.

I guess this definitively answers the question of whether this should live in torchdynamo XD (the code even carries the comment "# TODO: maybe this should live in torchdynamo instead").

There's still one test failing, but to make it green for now I just added torchdynamo.config.raise_on_backend_error = False
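The suppression flag works by catching the backend compiler's exception and running the captured code eagerly instead. Here is a minimal plain-Python sketch of that fall-back pattern; all names (Config, compile_with_fallback, broken_backend) are stand-ins for illustration, not torchdynamo's actual internals.

```python
# Sketch of the "fall back to eager on backend failure" pattern that
# raise_on_backend_error controls. All names here are stand-ins, not
# torchdynamo's real implementation.
class Config:
    raise_on_backend_error = True

config = Config()

class BackendCompilerFailed(RuntimeError):
    pass

def compile_with_fallback(fn, backend):
    """Try the backend compiler; on failure either raise or run fn eagerly."""
    try:
        compiled = backend(fn)
    except Exception as e:
        if config.raise_on_backend_error:
            raise BackendCompilerFailed(f"{backend.__name__} raised {e!r}") from e
        compiled = fn  # suppress the error and fall back to the original function
    return compiled

# A backend that always fails, like the broken cudagraphs path in this issue:
def broken_backend(fn):
    raise RuntimeError("Trying to resize storage that is not resizable")

config.raise_on_backend_error = False
f = compile_with_fallback(lambda x: x + 1, broken_backend)
assert f(1) == 2  # eager fallback still computes the right answer
```

With the flag left at its default (True), the same call surfaces BackendCompilerFailed, which matches the behavior shown in the logs below.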

Repro of the problem before the fix:

>>> from torchdynamo.optimizations.training import AOTAutogradStrategy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'AOTAutogradStrategy' from 'torchdynamo.optimizations.training' (/home/ubuntu/torchdynamo/torchdynamo/optimizations/training.py)
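The fix itself just chases the rename, but a test that wants to tolerate both spellings can probe for either name. A sketch of that lookup, using a stand-in module (the module layout here is an assumption mirroring the traceback above, not torchdynamo's real package):

```python
# Hypothetical sketch: resolve a class under either its old or new spelling,
# so a caller survives the AOTAutogradStrategy -> AotAutogradStrategy rename.
# The module below is a stand-in, not torchdynamo itself.
import types

# Stands in for torchdynamo.optimizations.training:
training = types.ModuleType("training")

class AotAutogradStrategy:  # only the post-rename spelling exists
    pass

training.AotAutogradStrategy = AotAutogradStrategy

# Accept whichever spelling the installed version exposes:
Strategy = getattr(training, "AOTAutogradStrategy", None) or getattr(
    training, "AotAutogradStrategy"
)
```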

Logs after

(dynamo) ubuntu@ip-172-31-27-252:~/pytorch/test$ pytest test_dynamo_cudagraphs.py 
========================================================================================= test session starts ==========================================================================================
platform linux -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/ubuntu/pytorch, configfile: pytest.ini
collected 8 items                                                                                                                                                                                      

test_dynamo_cudagraphs.py .......F                                                                                                                                                               [100%]

=============================================================================================== FAILURES ===============================================================================================
______________________________________________________________________________ TestDynamoCudaGraphs.test_mutated_metadata ______________________________________________________________________________
Traceback (most recent call last):
  File "/home/ubuntu/torchdynamo/torchdynamo/output_graph.py", line 363, in call_user_compiler
    compiled_fn = self.compiler_fn(gm, self.example_inputs())
  File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 191, in __call__
    result = self.candidate(*self.example_inputs)
  File "/home/ubuntu/functorch/functorch/_src/aot_autograd.py", line 696, in forward
    return compiled_f(
  File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 94, in _fn
    return fn(*args, **kwargs)
  File "/home/ubuntu/functorch/functorch/_src/aot_autograd.py", line 224, in forward
    fx_g = make_fx(joint_forward_backward, aot_decompositions)(
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py", line 568, in wrapped
    t = dispatch_trace(wrap_key(f, args, proxy_mode), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 94, in _fn
    return fn(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py", line 351, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 94, in _fn
    return fn(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py", line 715, in trace
    (self.create_arg(fn(*args)),),
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py", line 550, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py", line 376, in wrapped
    out = f(*tree_args)
  File "/home/ubuntu/functorch/functorch/_src/aot_autograd.py", line 77, in joint_forward_backward
    outs = fn(*primals)
  File "/home/ubuntu/functorch/functorch/_src/aot_autograd.py", line 660, in functional_call
    out = mod(*args[params_len:], **kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/graph_module.py", line 655, in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/graph_module.py", line 277, in __call__
    raise e
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/graph_module.py", line 267, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py", line 693, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py", line 320, in call_module
    return forward(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py", line 686, in forward
    return _orig_module_call(mod, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
    return forward_call(*input, **kwargs)
  File "<eval_with_key>.86", line 6, in forward
    resize_ = clone.resize_(20)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/utils/_python_dispatch.py", line 74, in wrapped
    return f(self, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py", line 404, in __torch_dispatch__
    return proxy_call(self, func_overload, args, kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/fx/experimental/proxy_tensor.py", line 176, in proxy_call
    func_overload(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/dynamo/lib/python3.8/site-packages/torch/_ops.py", line 56, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: Trying to resize storage that is not resizable

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ubuntu/pytorch/test/test_dynamo_cudagraphs.py", line 169, in test_mutated_metadata
    rx = model(x)
  File "/home/ubuntu/torchdynamo/torchdynamo/eval_frame.py", line 151, in catch_errors
    return callback(frame, cache_size)
  File "/home/ubuntu/torchdynamo/torchdynamo/convert_frame.py", line 347, in _convert_frame
    result = inner_convert(frame, cache_size)
  File "/home/ubuntu/torchdynamo/torchdynamo/convert_frame.py", line 108, in _fn
    return fn(*args, **kwargs)
  File "/home/ubuntu/torchdynamo/torchdynamo/convert_frame.py", line 288, in _convert_frame_assert
    code = transform_code_object(frame.f_code, transform)
  File "/home/ubuntu/torchdynamo/torchdynamo/bytecode_transformation.py", line 338, in transform_code_object
    transformations(instructions, code_options)
  File "/home/ubuntu/torchdynamo/torchdynamo/convert_frame.py", line 264, in transform
    tracer.run()
  File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 312, in run
    and self.step()
  File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 290, in step
    getattr(self, inst.opname)(inst)
  File "/home/ubuntu/torchdynamo/torchdynamo/symbolic_convert.py", line 1342, in RETURN_VALUE
    self.output.compile_subgraph(self)
  File "/home/ubuntu/torchdynamo/torchdynamo/output_graph.py", line 284, in compile_subgraph
    self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
  File "/home/ubuntu/torchdynamo/torchdynamo/output_graph.py", line 349, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
  File "/home/ubuntu/torchdynamo/torchdynamo/output_graph.py", line 372, in call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e) from e
torchdynamo.exc.BackendCompilerFailed: ? raised RuntimeError: Trying to resize storage that is not resizable

You can suppress this exception and fall back to eager by setting:
    torchdynamo.config.raise_on_backend_error = False
----------------------------------------------------------------------------------------- Captured stderr call -----------------------------------------------------------------------------------------
----------------------------------------
TORCHDYNAMO: backend compiler failed
(traceback identical to the one above)
----------------------------------------
------------------------------------------------------------------------------------------ Captured log call -------------------------------------------------------------------------------------------
ERROR    torchdynamo.eval_frame:eval_frame.py:201 error in verify_correctness
(traceback identical to the one above)
ERROR    root:eval_frame.py:154 Error while processing frame
(traceback identical to the one above)
======================================================================================= short test summary info ========================================================================================
FAILED test_dynamo_cudagraphs.py::TestDynamoCudaGraphs::test_mutated_metadata - torchdynamo.exc.BackendCompilerFailed: ? raised RuntimeError: Trying to resize storage that is not resizable
=============================================================================== 1 failed, 7 passed, 9 warnings in 4.06s ================================================================================

msaroufim · Aug 09 '22 23:08

:link: Helpful links

:white_check_mark: No Failures (0 Pending)

As of commit e0ecef9c86 (more details on the Dr. CI page):


:green_heart: :green_heart: Looks good so far! There are no failures yet. :green_heart: :green_heart:


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


facebook-github-bot · Aug 09 '22 23:08

This will be subsumed by https://github.com/pytorch/torchdynamo/pull/757

ezyang · Aug 10 '22 02:08

Sounds good, I'll repurpose this PR to delete what we no longer need in pytorch/pytorch

Tomorrow I'm spending some time looking at improving logging in the fx partitioner

msaroufim · Aug 10 '22 04:08

Yes plz

ezyang · Aug 10 '22 04:08

@pytorchbot merge -g

msaroufim · Aug 10 '22 17:08

@pytorchbot successfully started a merge job. Check the current status here

pytorchmergebot · Aug 10 '22 17:08

Hey @msaroufim. You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'. For changes that are 'topic: not user facing' there is no need for a release notes label.

github-actions[bot] · Aug 10 '22 20:08