JackCaoG


Ah OK, I think the `CompositeExplicitAutogradNonFunctional` issue is related to the fact that I have `torch 1.12` preinstalled on the TPU VM. The real build error is related to `kl_div_backward`

Hi @butchland, thanks for reporting! Could you follow the instructions in [here](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#using-debug_runpy-to-collect-debug-information) to do a debug run? This way we can know what exactly happened. My guess would be that...

Oh, so it looks like you are running a small code snippet that does not finish a full step, so the metrics report is not generated. Do you mind running...

yup, you are right.. `_local_scalar_dense` most likely comes from a PyTorch `item()` call. The other two look like ops we need to add a lowering for. We are a bit busy with the...

yup, I will update this thread when I make any progress on lowering these two ops. We have a section in [here](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#known-performance-caveats) talking about the `item` call; the takeaway is...
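
In case it helps, here is a minimal sketch (assuming a TPU VM with `torch_xla` installed; the tensor shapes and the "loss" are just placeholders) of how an `item()` call shows up in the metrics report:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
x = torch.randn(4, 4, device=device)
loss = x.sum()

# item() copies the scalar back to the host and forces the pending graph to
# execute; this is what the _local_scalar_dense counter is tracking.
loss_value = loss.item()
xm.mark_step()

# Look for aten::_local_scalar_dense and the transfer metrics in the report.
print(met.metrics_report())
```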

`affine_grid` should be supported now. To get a better understanding of the problem, printing a metrics report after a step will help:

```python
import torch_xla.debug.metrics as met
print(met.metrics_report())
```

@dhruvrnaik if you have a small repro I might be able to take a look. It depends on what ops `transforms.Resize` gets decomposed into by PyTorch and passed to us.
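
Not a definitive recipe, but this is roughly how I would check what `transforms.Resize` decomposes into on an XLA device (assuming `torchvision` with the tensor backend is installed; the image is random data). Any counter starting with `aten::` is an op we did not lower and that fell back to CPU:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met
from torchvision import transforms

device = xm.xla_device()
img = torch.rand(3, 224, 224, device=device)  # tensor-backed Resize path

out = transforms.Resize((128, 128))(img)
xm.mark_step()

# Ops without an XLA lowering show up here as aten:: counters.
print([name for name in met.counter_names() if name.startswith('aten::')])
```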

Taking another look at the CPU `grid_sampler_2d` implementation, which seems to play with the stride quite a bit. This kind of op is pretty difficult to lower for XLA...

I don't think we have anything similar to `grid_sample`. PyTorch/XLA supports `upsample_nearest2d`, but I feel like that's not what you want. I was trying to find if there is a...
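
For reference, a small hedged sketch of how to check whether `grid_sample` is lowered on your `torch_xla` build; the tensor shapes are made up, and the check assumes `met.counter_value` returns `None` when the counter never fired:

```python
import torch
import torch.nn.functional as F
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
inp = torch.rand(1, 3, 8, 8, device=device)
grid = torch.rand(1, 8, 8, 2, device=device) * 2 - 1  # normalized sample coords

out = F.grid_sample(inp, grid, align_corners=False)
xm.mark_step()

# A non-None value means the op had no XLA lowering and was routed to CPU.
print(met.counter_value('aten::grid_sampler_2d'))
```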

Want to make sure that this flag is harmless (seems like that's the case) without the corresponding TF change. We update the TF version manually; the next update will happen in a couple of weeks.