Yukio Siraichi

Results: 39 issues by Yukio Siraichi

Currently, I am using `numba` to JIT-compile the guards. Alternatively, we could use LLVM ORC JIT to build our own JIT solely for this purpose.

#### Summary of the changes...
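To make the idea above concrete, here is a minimal sketch of what a compiled guard looks like. This is a hypothetical illustration, not the PR's actual implementation: in the approach described, numba's `@njit` decorator would be applied to the guard so it runs as native code; the body here is plain Python so the sketch runs without numba installed.

```python
# Hypothetical sketch: a guard is a cheap predicate re-checked at call time
# before reusing previously compiled code. numba's @njit would compile a
# function like `guard` in the actual approach; here it stays plain Python.

def make_shape_guard(expected_shape):
    """Build a guard that passes only when the runtime shape matches the
    shape recorded at trace time."""
    expected = tuple(expected_shape)

    def guard(shape):
        # Compare the observed shape against the recorded one.
        return tuple(shape) == expected

    return guard

guard = make_shape_guard((2, 3))
print(guard((2, 3)))  # matching shape: prints True
print(guard((2, 4)))  # shape changed, guard fails: prints False
```

The appeal of JIT-compiling such predicates is that they sit on the hot path of every cached call, so even small per-check overhead matters.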

open source
release notes: fx
module: dynamo
ciflow/inductor

I am trying to define a custom function with lower precedence than multiplication. However, whenever a negative number is involved, the parentheses disappear.

-----

**Custom function:**...

Bug
printing

```python
>>> f = Symbol("f", real=True)
>>> i = Symbol("i", integer=True)
>>> Eq(f, 3.0)
Eq(f, 3.0)
>>> Eq(i, 3.0)
False
```

Isn't the last one also supposed to be: `Eq(i,...

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):

* __->__ #120117

This PR updates the list of benchmarks that should (not) be skipped. Here's a summary of the changes:

- `detectron2_maskrcnn`: #120115...

open source
module: dynamo
ciflow/inductor

## 🐛 Bug

After #6296, a few detectron2 benchmarks started failing when using XLA:

```bash
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench --accelerator cuda --repeat 2 \
    --test eval --xla PJRT --dynamo...
```

xla:gpu

Re-land: #6624

This PR adds a fast path on top of the #6624 changes.

**Fast path:** keep the old behavior of `as_strided_copy`

- Check that the size and strides specify a non-overlapping...
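The "non-overlapping and dense" condition mentioned above can be sketched as follows. This is a hedged illustration, not the PR's actual code: a `(sizes, strides)` pair is dense and non-overlapping when the dimensions can be reordered so the innermost stride is 1 and each outer stride equals the next inner stride times that dimension's size.

```python
# Hypothetical sketch (not the PyTorch/XLA implementation) of checking that
# (sizes, strides) describe a non-overlapping, dense memory layout.

def is_non_overlapping_and_dense(sizes, strides):
    # Size-0 tensors occupy no memory, so any strides are acceptable.
    if 0 in sizes:
        return True
    # Ignore size-1 dimensions: their stride never affects the layout.
    dims = [(st, sz) for sz, st in zip(sizes, strides) if sz != 1]
    # Order dimensions from smallest stride (innermost) outward.
    dims.sort()
    expected = 1
    for stride, size in dims:
        # Each stride must exactly tile the elements covered so far.
        if stride != expected:
            return False
        expected *= size
    return True

print(is_non_overlapping_and_dense((2, 3), (3, 1)))  # row-major contiguous: prints True
print(is_non_overlapping_and_dense((2, 3), (1, 2)))  # column-major contiguous: prints True
print(is_non_overlapping_and_dense((2, 3), (6, 2)))  # gaps between elements: prints False
```

When the check passes, every element maps to a distinct address in one dense block, which is what lets the fast path keep the old `as_strided_copy` behavior safely.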

xla:gpu

## Summary of Contributions (9th Feb)

1) **Improve the number of models in TorchBench that work with Dynamo as a tracer:** These passing rates are now comparable to those from...

xla:gpu

This PR adds a test for #6009. The actual fix is implemented in the PyTorch main repository: https://github.com/pytorch/pytorch/pull/121075 @miladm @JackCaoG

xla:gpu

This PR adds a test for #5837. The fix was introduced in the PyTorch main repository (https://github.com/pytorch/pytorch/pull/121007), but we need PyTorch/XLA to actually exercise the (previously) failing test case. cc @miladm...

REMOVE_TORCH_PIN
xla:gpu

## 🐛 Bug

After starting to run training benchmarks on AMP (#6518), `timm_nfnet` fails with the following error:

```python
2024-02-27 05:47:24.611656: F ./torch_xla/csrc/runtime/debug_macros.h:20] Non-OK-status: status.status() status: INTERNAL: during context...
```

xla:gpu