Avoid running TOR003 by default
Today, use_reentrant defaults to True, but TOR003 sets use_reentrant=False, which may subtly differ in behavior in certain cases. We should either make TOR003 set use_reentrant=True, or not run it by default.
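For context, a minimal sketch of the kind of change the TOR003 autofix makes (illustrative, not TorchFix's exact output):

```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(4, 4)
x = torch.randn(2, 4, requires_grad=True)

# Before the autofix: use_reentrant is unspecified, so it currently
# falls back to True (the legacy reentrant implementation).
out_before = checkpoint(layer, x)

# After the autofix: explicitly requests the recommended non-reentrant
# implementation, which may behave subtly differently in some cases.
out_after = checkpoint(layer, x, use_reentrant=False)
```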
cc @kit1980
I think it's fine that the autofix is sometimes incorrect, as it's not possible to make it work correctly in every case. From the README:
> Please keep in mind that autofix is a best-effort mechanism. Given the dynamic nature of Python, and especially the beta version status of TorchFix, it's very difficult to have certainty when making changes to code, even for the seemingly trivial fixes.
The idea is to provide the recommended value (as the doc for checkpoint says) and let users test whether it works for them.
Otherwise users will just keep using use_reentrant=True when it's not needed.
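For example, one difference documented for checkpoint is that torch.autograd.grad only works with the non-reentrant variant, so the recommended value can actually unblock code (a minimal sketch):

```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(4, 4)
x = torch.randn(2, 4, requires_grad=True)

# Non-reentrant checkpointing supports torch.autograd.grad ...
out = checkpoint(layer, x, use_reentrant=False).sum()
(grad,) = torch.autograd.grad(out, x)

# ... while the reentrant variant (use_reentrant=True) only supports
# torch.autograd.backward(), so the grad() call above would fail with it.
```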
There is another rule with a similar risk of unsafe behavior: "TOR102 torch.load without weights_only parameter is unsafe".
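The trade-off there is the same kind of thing (a minimal sketch of the before/after, not TorchFix's exact output):

```python
import torch

torch.save({"weights": torch.randn(2, 2)}, "ckpt.pt")

# Flagged by TOR102: plain torch.load unpickles arbitrary objects,
# which can execute code when loading an untrusted file.
state = torch.load("ckpt.pt")

# With the fix: only tensors and primitive containers are unpickled,
# so checkpoints containing other pickled Python objects stop loading.
state = torch.load("ckpt.pt", weights_only=True)
```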
Maybe we should designate autofixes as safe or unsafe, like ruff recently started to do: https://docs.astral.sh/ruff/linter/#fix-safety.
Or maybe there should be a separate configuration for fixes: a rule could run by default, but not autofix by default.
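Roughly, the designation could look something like this internally (the Fix class, the safe flag, and the rule entries here are hypothetical, not TorchFix's actual API):

```python
from dataclasses import dataclass

@dataclass
class Fix:
    rule: str
    replacement: str
    safe: bool  # hypothetical flag: is this autofix behavior-preserving?

FIXES = [
    Fix("TOR0XX", "...", safe=True),                    # placeholder safe fix
    Fix("TOR003", "use_reentrant=False", safe=False),   # may change behavior
    Fix("TOR102", "weights_only=True", safe=False),     # may break loading
]

def select_fixes(fixes, apply_unsafe=False):
    """Report every violation, but only autofix the safe ones by default."""
    return [f for f in fixes if f.safe or apply_unsafe]
```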
And maybe it's possible to statically detect some cases where use_reentrant=False certainly will not work?
> Maybe we should designate autofixes as safe or unsafe, like ruff recently started to do
Maybe we could offer safe and unsafe versions of each fix when possible? If both modes exist, I would imagine users would first try running with unsafe fixes, and if anything breaks, fall back to safe mode. I feel like this would be a good default for users who don't have the time to actually dive in and debug anything, but still want to make sure their code is compliant for the most part.
> And maybe it's possible to statically detect some cases where use_reentrant=False certainly will not work?
Yeah, this will be hard to do statically, unfortunately.