Results: 23 issues by Wonjoo Lee

## 🚀 Feature: As PyTorch/XLA migrates to LTC (Lazy Tensor Core), we need to clean up the existing stub code (which spans 6+ files) that was used to...

Label: codegen

Fixes https://github.com/pytorch/xla/issues/3861 (Codegen `clamp.Tensor`) --- LazyIr.h:

```
class ClampTensor : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::clamp);
  }

  ClampTensor(const torch::lazy::Value& self,
              const c10::optional<torch::lazy::Value>& min,
              const c10::optional<torch::lazy::Value>& max, ...
```
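The generated class above is cut off mid-constructor. As a hedged sketch of how such a codegen'd node typically continues, assuming pytorch/xla's `XlaNode` base and the usual torch::lazy codegen layout (the operand handling, presence flags, and hash arguments are illustrative assumptions, not the actual generated file):

```
// Hedged sketch; assumes pytorch/xla's XlaNode base. Member names, operand
// handling, and hash arguments are illustrative assumptions.
#include <c10/util/Optional.h>
#include <torch/csrc/lazy/core/hash.h>
#include <torch/csrc/lazy/core/ir.h>

#include "torch_xla/csrc/ir.h"  // XlaNode

class ClampTensor : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::clamp);
  }

  ClampTensor(const torch::lazy::Value& self,
              const c10::optional<torch::lazy::Value>& min,
              const c10::optional<torch::lazy::Value>& max,
              std::vector<torch::lazy::Shape>&& shapes)
      // Optional tensor inputs become (possibly null) IR operands; the
      // presence flags are folded into the hash so a clamp with only `min`
      // never aliases a clamp with only `max`.
      : XlaNode(ClassOpKind(),
                {self, min.value_or(torch::lazy::Value()),
                 max.value_or(torch::lazy::Value())},
                std::move(shapes),
                /*num_outputs=*/1,
                torch::lazy::MHash(min.has_value(), max.has_value())),
        has_min(min.has_value()),
        has_max(max.has_value()) {}

  // Lowering reads these to decide which operands are real.
  bool has_min;
  bool has_max;
};
```

Promoting the optional tensors to (possibly null) operands keeps the operand list a fixed size, while the presence flags let the lowering distinguish "min only" from "max only".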

Fixes https://github.com/pytorch/xla/issues/3860 --- Codegen `all.dim` --- LazyIr.h:

```
class AllDim : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::all);
  }

  AllDim(const torch::lazy::Value& self, const int64_t& dim, const bool& ...
```
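Unlike the optional tensor operands of `clamp.Tensor`, plain scalar arguments such as `dim` and `keepdim` never become IR operands. A minimal sketch of that pattern, again assuming pytorch/xla's `XlaNode` base (the member layout and hashing are illustrative assumptions):

```
// Hedged sketch of the scalar-attribute pattern; assumes pytorch/xla's
// XlaNode base. Non-tensor inputs are stored as members and folded into
// the node hash instead of becoming IR operands.
#include <torch/csrc/lazy/core/hash.h>
#include <torch/csrc/lazy/core/ir.h>

#include "torch_xla/csrc/ir.h"  // XlaNode

class AllDim : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::all);
  }

  AllDim(const torch::lazy::Value& self, const int64_t& dim,
         const bool& keepdim, std::vector<torch::lazy::Shape>&& shapes)
      // dim and keepdim participate in the hash so that all(x, 0, true)
      // and all(x, 1, false) produce distinct cached nodes.
      : XlaNode(ClassOpKind(), {self}, std::move(shapes),
                /*num_outputs=*/1, torch::lazy::MHash(dim, keepdim)),
        dim(dim),
        keepdim(keepdim) {}

  int64_t dim;
  bool keepdim;
};
```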

Codegen `clamp.Tensor`

Label: codegen

Codegen `all.dim`

Label: codegen

We're seeing some issues when trying to fully codegen ops that have `at::Scalar` inputs, such as the `addcmul` op. The generated `class Addcmul` in `LazyIr.h` looks like:... (see the sketch below)

Label: codegen
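The `Addcmul` snippet above is truncated; the following is only a plausible reconstruction, assuming the generated class keeps the `at::Scalar` argument as a plain member, to illustrate why Scalar inputs are awkward to fully codegen (the actual generated code and failure mode may differ):

```
// Hedged reconstruction of the truncated snippet; assumes the at::Scalar
// argument is kept as a plain member. Illustrative only, not the actual
// generated file.
#include <ATen/ATen.h>
#include <torch/csrc/lazy/core/hash.h>
#include <torch/csrc/lazy/core/ir.h>

#include "torch_xla/csrc/ir.h"  // XlaNode

class Addcmul : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::addcmul);
  }

  Addcmul(const torch::lazy::Value& self, const torch::lazy::Value& tensor1,
          const torch::lazy::Value& tensor2, const at::Scalar& value,
          std::vector<torch::lazy::Shape>&& shapes)
      // The scalar never becomes an IR operand: it has to be hashed into
      // the node here and materialized as an XLA constant at lowering
      // time, unlike the tensor inputs.
      : XlaNode(ClassOpKind(), {self, tensor1, tensor2}, std::move(shapes),
                /*num_outputs=*/1, torch::lazy::MHash(value.toDouble())),
        value(value) {}

  at::Scalar value;
};
```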

Codegen `take` --- Shape inference PR on PyTorch: https://github.com/pytorch/pytorch/pull/82679 --- LazyIr.h:

```
class Take : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::take);
  }

  Take(const torch::lazy::Value& self, const torch::lazy::Value& ...
```

(A sketch of the corresponding shape-inference function follows below.)

Labels: REMOVE_TORCH_PIN, codegen
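The linked PyTorch PR supplies the shape-inference hook that codegen needs for `take`. A minimal sketch of such a function in the style of torch/csrc/lazy/core/shape_inference.h (the exact name and body here are assumptions): `at::take` returns a tensor with `self`'s dtype and `index`'s sizes.

```
// Hedged sketch of a lazy shape-inference function for take(); the exact
// name and registration are assumptions. at::take returns a tensor with
// self's dtype and index's sizes.
#include <ATen/ATen.h>
#include <torch/csrc/lazy/core/shape.h>

#include <vector>

std::vector<torch::lazy::Shape> compute_shape_take(const at::Tensor& self,
                                                   const at::Tensor& index) {
  return {torch::lazy::Shape(self.scalar_type(), index.sizes().vec())};
}
```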

Codegen `take` op