Results 100 comments of Brian Hirsh

Also tagging @zou3519 for the HOP changes

One thing I wonder if it’s worth doing: figure out how to trace get_rank() symbolically in the graph (like a dynamic shape / SymInt, but in a special way where...

> For now, I agree with @yf225 on making such optimizations "use at your own risk", opt-in passes.

Sounds good!

> It seems that it's quite challenging to cover all...

Hey @sampepose - @ezyang, @eellison and I agreed that I won't land this fix; instead, I'll try to have inductor "fix" output strides to match eager in the...

This PR is subsumed by https://github.com/pytorch/pytorch/pull/121688

Hey @jay746, just confirming, but did you file this as a regression? I also tried installing torch 2.0.0, and confirmed that I see the same segfault. So looks like a...

`sgn` also has a backward formula that uses efficientzerotensor: https://github.com/pytorch/pytorch/blob/main/torch/csrc/autograd/FunctionsManual.cpp#L578 This looks like a bad interaction between efficientzerotensor and functionalization. Here's a minimal repro:

```
import torch

def f():
    torch._enable_functionalization(reapply_views=True)
    ...
```

I'm tentatively removing hi-pri, since this is "fixed" on tip of main (you cannot repro the segfault). Since this is not a regression, I'm not sure that this is a...

I wanted to leave this issue open and mark it for triage review, though, because the state of ZeroTensor with `torch.compile` (and `make_fx` in particular) seems suboptimal. For example, the below...

Moving this issue to `torch/text`, since it looks specific to some torchtext APIs.