albanD

263 comments by albanD

Hi, sorry for the long delay in answering. I think the question is best answered with an example, so take a look at https://github.com/albanD/subclass_zoo/pull/36. Let me know what...

> How to rename Tensor.device.type as "my_device"?

When you call `tensor.device`, that will call `__torch_function__` for `Tensor.device.__get__`. You can then return anything you want from there.

> the MyDeviceMode seems...
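To make the dispatch concrete, here is a minimal sketch (the subclass name and the returned string are my own illustration, not from the thread): attribute access on a Tensor subclass is routed through `__torch_function__` with the property getter as `func`, so intercepting `Tensor.device.__get__` lets the subclass report any device it likes.

```python
import torch

class MyDeviceTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # `.device` access arrives here with the property getter as `func`.
        if func is torch.Tensor.device.__get__:
            return "my_device"  # return anything you want from here
        return super().__torch_function__(func, types, args, kwargs)

t = torch.randn(2).as_subclass(MyDeviceTensor)
print(t.device)  # "my_device"
```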

Hi, when you do `x.exp().sum().backward()`, do you expect `x.grad` to be populated? Or `x.tensor.grad` to be populated?

The high-level idea is that you have to choose: either `x` gets autograd or `x.tensor` does, but it can't be both. Here is an extension to your script to show...
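As a minimal sketch of the first choice (the `MyTensor` name and the `.tensor` attribute are assumptions standing in for the original script): autograd tracks the outer subclass, so `x.grad` is populated and the inner tensor never enters the graph.

```python
import torch

class MyTensor(torch.Tensor):
    """Hypothetical wrapper holding an inner `.tensor` (names are assumptions)."""

    @staticmethod
    def __new__(cls, tensor):
        # Autograd on the *outer* subclass: the new leaf requires grad,
        # so backward() populates x.grad, not x.tensor.grad.
        return torch.Tensor._make_subclass(cls, tensor.detach(), True)

    def __init__(self, tensor):
        self.tensor = tensor

x = MyTensor(torch.randn(3))
x.exp().sum().backward()
print(x.grad)         # populated: autograd tracked the subclass itself
print(x.tensor.grad)  # None: the inner tensor never entered the graph
```

Making gradients land on `x.tensor` instead would mean keeping the wrapper out of the graph and forwarding every op to the inner tensor, which is the other branch of the choice above.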

> the subclass constructor needs to retain the gradient, so in `__new__()` we use `torch.Tensor.as_subclass(tensor, cls)` instead of `torch.Tensor._make_subclass(cls, tensor)`, since the input tensor can be the output of a...
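A small sketch of the difference the quote is pointing at (my own example, not from the thread): `as_subclass` keeps the input's autograd history, while `_make_subclass` builds a fresh leaf with no history.

```python
import torch

class MyT(torch.Tensor):
    pass

t = torch.rand(2, requires_grad=True)
y = t * 2  # non-leaf: carries a grad_fn

a = y.as_subclass(MyT)  # retains the autograd history of `y`
print(a.grad_fn)        # e.g. an AliasBackward0, still connected to the graph

b = torch.Tensor._make_subclass(MyT, y.detach())  # fresh leaf, history dropped
print(b.grad_fn)        # None
```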

`t.data = ` is extremely dodgy to begin with! Like, what are we doing here?

```python
t = torch.rand(2, requires_grad=True)
t2 = t.clone()
# silently swaps both the shape and the dtype under autograd's nose
t2.data = torch.zeros(5, dtype=torch.int64)
print(t2)  # tensor([0,...
```

> If some tensor's autograd graph contains a node created by torch.compile (e.g. CompiledFunctionBackward), it should run compiled autograd.

My question is: why? This custom Function already properly contains the...
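For context, a minimal sketch (my own, assuming a recent PyTorch 2.x) of the situation being discussed: a `torch.compile`d forward leaves a `CompiledFunctionBackward` node in the graph, and plain eager autograd can already execute it.

```python
import torch

@torch.compile
def f(x):
    return x.exp().sum()

x = torch.randn(3, requires_grad=True)
loss = f(x)
print(loss.grad_fn)  # e.g. <CompiledFunctionBackward ...>, created by torch.compile
loss.backward()      # eager autograd runs this node just fine
print(x.grad)        # populated without compiled autograd
```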

> If I understand the concern properly, CA should only handle gradients for nodes originating between the compiled start and compiled end regions in this example:

My mental model of...