
torchbench: torch.no_grad() not working for dynamo inductor backend

Open snadampal opened this issue 9 months ago • 4 comments

The torch dynamo inductor backend does not see the torch.no_grad() setting applied by the torchbench framework.

This is the reproducer:

    python run_benchmark.py cpu --model hf_Bert --test eval --torchdynamo inductor

Here is the code snippet from torchbench that sets no_grad(): https://github.com/pytorch/benchmark/blob/main/torchbenchmark/util/framework/huggingface/model_factory.py#L120

    with torch.no_grad():
        with self.amp_context():
            out = self.model(**self.example_inputs)

But the torch dynamo inductor backend does not see that grad is disabled, and hence does not trigger the graph-level optimizations: https://github.com/pytorch/pytorch/blob/v2.3.0/torch/_inductor/compile_fx.py#L1270
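
For reference, here is a self-contained sketch of the failing pattern, using a hypothetical toy module in place of the torchbench model (the class name and tensor shapes are illustrative, not from the benchmark):

    import torch

    class ToyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(8, 8)

        def forward(self, x):
            return self.linear(x)

    model = ToyModel().eval()
    compiled = torch.compile(model, backend="inductor")
    x = torch.randn(4, 8)

    # The call runs under no_grad(), so one would expect Inductor to
    # compile an inference-only graph; per this report, the disabled
    # grad mode is not picked up at compile time.
    with torch.no_grad():
        out = compiled(x)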

snadampal avatar Apr 30 '24 15:04 snadampal

I found this Stack Overflow discussion, but I'm not sure if it's related: https://stackoverflow.com/questions/75022490/pytorch-torch-no-grad-doesnt-affect-modules

snadampal avatar Apr 30 '24 15:04 snadampal

I can reproduce it using a toy module; I have reported it to Inductor upstream.

xuzhao9 avatar May 03 '24 17:05 xuzhao9

Great, thanks for reproducing it! I had raised the issue on the pytorch repo too.

snadampal avatar May 03 '24 17:05 snadampal

The bug has been confirmed by the upstream Inductor developers, but there is no quick way to fix it. I believe the best option is to move the no_grad() context from the model level to the framework level. I will work on that.
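
A rough sketch of the idea (the runner function below is hypothetical, not the actual torchbench framework code):

    import torch

    def run_eval(model, example_inputs):
        # Hypothetical framework-level runner: the no_grad() context
        # wraps the entire eval invocation, including the compiling
        # first call, so the graph is traced with grad mode already off.
        with torch.no_grad():
            compiled = torch.compile(model, backend="inductor")
            return compiled(*example_inputs)

    out = run_eval(torch.nn.Linear(8, 8).eval(), (torch.randn(4, 8),))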

xuzhao9 avatar May 03 '24 18:05 xuzhao9