Results: 23 issues by Wonjoo Lee

Fixes https://github.com/pytorch/pytorch/pull/85638

The upstream PR makes the `upsample_*.vec` ops CompositeImplicit, so the backward no longer needs an explicit implementation.
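For context (an illustration, not code from the PR): a CompositeImplicitAutograd op is expressed in terms of other differentiable ops, so autograd derives its backward automatically. A minimal sketch of why no explicit backward is needed once the `.vec` overloads are composite:

```python
# Sketch, assuming a stock PyTorch build: F.interpolate dispatches to the
# upsample_*.vec overloads. Once those are CompositeImplicitAutograd,
# autograd differentiates through their decomposition, so no hand-written
# backward (and no backward lowering in PyTorch/XLA) is required.
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8, requires_grad=True)
y = F.interpolate(x, scale_factor=2, mode="nearest")
y.sum().backward()  # backward comes from the composite decomposition
print(x.grad.shape)  # torch.Size([1, 3, 8, 8])
```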

pytorch breaking
REMOVE_TORCH_PIN

TODO:
- Decide what to do with incompatible types
- Unit tests

ltc

Fixes https://github.com/pytorch/xla/issues/3924

Codegen `flip` op.

LazyIr.h:

```
class Flip : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::flip);
  }

  Flip(const torch::lazy::Value& self, const ::std::vector<int64_t>& dims,
       std::vector&&...
```
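For reference (not part of the generated code): `flip` reverses element order along the listed dimensions, which in PyTorch/XLA maps onto XLA's Rev (reverse) HLO. A quick semantic check of the op being codegen'd:

```python
# Semantics of the op being codegen'd: flip reverses the given dims.
import torch

x = torch.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
print(torch.flip(x, dims=[0]))      # [[3, 4, 5], [0, 1, 2]]
print(torch.flip(x, dims=[0, 1]))   # [[5, 4, 3], [2, 1, 0]]
```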

codegen

Codegen `fmod.Scalar` op

codegen

Codegen `fmod.Tensor` op

codegen

Codegen `flip` op

codegen

Codegen `elu_backward`

codegen

As a follow-up to https://github.com/pytorch/xla/issues/6294, we realized that many ops in PyTorch/XLA were lowered at the IR level. This caused multiple unnecessary decompositions, slowing down tracing. This...
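As a hedged sketch of how such decompositions can be observed (not taken from the issue itself): torch_xla's debug metrics report a counter per traced op, so the report shows which ops were handled natively and which were not:

```python
# Sketch, assuming a working torch_xla install: inspect which ops the trace
# actually hit. Natively lowered ops typically show up as "xla::*" counters;
# ops that fall back to CPU show up as "aten::*" counters.
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
x = torch.randn(4, 4, device=device)
y = torch.flip(x, dims=[0])
xm.mark_step()  # materialize the traced graph

print(met.metrics_report())  # look for xla::flip vs. aten::* entries
```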

good first issue

For PyTorch/XLA to support the PyTorch core ATen opset, each core ATen op must be lowered in PyTorch/XLA. This issue is used to track the PyTorch/XLA lowering for...
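As a side note (an assumption about recent PyTorch builds, not stated in the issue): core ATen ops carry a `core` tag that can be queried from Python, which helps identify which ops belong to the opset being tracked:

```python
# Sketch, assuming a PyTorch build where ATen ops expose tags: members of
# the core ATen opset are tagged with torch.Tag.core.
import torch

op = torch.ops.aten.add.Tensor
print(torch.Tag.core in op.tags)  # True for core ATen ops
```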

good first issue
core aten opset

With https://github.com/pytorch/xla/pull/6559, we found that `grid_sampler_2d` also needs lowering.