Yukio Siraichi

Results: 39 issues authored by Yukio Siraichi

## 🐛 Bug

After converting the `Super_SloMo` model to `bfloat16`, running inference on dynamo raises the following error:

```bash
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench --accelerator cuda \
    --xla PJRT --dynamo...
```
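The runner command above is truncated, so purely as an illustration, here is a minimal sketch of what "converting a model to `bfloat16` and running inference through dynamo" amounts to on XLA. `TinyModel` is a hypothetical stand-in for the torchbench model; the real experiment runner performs an equivalent cast internally.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # torch_xla provides the dynamo "openxla" backend


# Hypothetical stand-in for a torchbench model such as Super_SloMo.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        return self.linear(x)


device = xm.xla_device()

# Cast both the parameters and the inputs to bfloat16, then run
# inference through the dynamo "openxla" backend.
model = TinyModel().to(device=device, dtype=torch.bfloat16).eval()
x = torch.rand(8, 16, device=device, dtype=torch.bfloat16)

compiled = torch.compile(model, backend="openxla")
with torch.no_grad():
    out = compiled(x)
print(out.dtype)
```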

xla:gpu

## 🐛 Bug

After converting the `hf_GPT2` model (and its large variant) to `bfloat16` and running it (see command below), it fails with the following error:

```bash
python xla/benchmarks/experiment_runner.py \
    ...
```

xla:gpu

## 🐛 Bug

After converting the `vision_maskrcnn` model to `bfloat16`, running inference on dynamo raises the following error:

```bash
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench --accelerator cuda \
    --xla PJRT --dynamo...
```

xla:gpu

## 🐛 Bug

```bash
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench --accelerator cuda \
    --xla PJRT --dynamo None --test eval \
    --no-resume --print-subprocess \
    -k hf_GPT2
```

| | compilation time (s)...
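The table in this entry is truncated. For context, one way to observe XLA compilation time outside the benchmark harness is the torch_xla metrics report, which includes a `CompileTime` counter; the numbers in the issue come from the experiment runner itself, so treat the following only as a sketch.

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()

# Any small workload will do; the issue itself measures hf_GPT2.
x = torch.rand(128, 128, device=device)
y = (x @ x).sum()

xm.mark_step()               # trigger tracing, XLA compilation, and execution
print(met.metrics_report())  # the report includes CompileTime, ExecuteTime, ...
```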

xla:gpu

## 🐛 Bug

```bash
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench --accelerator cuda \
    --xla PJRT --dynamo None --test eval \
    --no-resume --print-subprocess \
    -k hf_GPT2
```

| | compilation time (s)...

xla:gpu

## 🐛 Bug

After converting the `Background_Matting` model to `bfloat16` and running it (see command below), it fails with the following error:

```bash
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench --accelerator cuda...
```

xla:gpu

## 🐛 Bug

Running a few torchbench benchmarks with the dynamo+openxla backend ends in an assertion failure:

```python
Traceback (most recent call last):
  File "torch/_dynamo/eval_frame.py", line 489, in _fn
    return...
```

xla:gpu

## 🐛 Bug

```python
@torch.compile(backend="openxla")
def foo(x):
    with torch_xla.amp.autocast(xm.xla_device(), dtype=torch.bfloat16):
        y = x * 5
    return y

x = torch.rand(5, device=xm.xla_device())
foo(x)
```

```
Traceback (most recent call last):
  File...
```
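For convenience, the same repro as a self-contained script with the imports it needs (the module paths assume a standard torch_xla installation):

```python
import torch
import torch_xla.amp
import torch_xla.core.xla_model as xm


@torch.compile(backend="openxla")
def foo(x):
    # bfloat16 autocast scope on the XLA device, inside the compiled function.
    with torch_xla.amp.autocast(xm.xla_device(), dtype=torch.bfloat16):
        y = x * 5
    return y


x = torch.rand(5, device=xm.xla_device())
foo(x)  # raises the traceback shown above
```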

xla:gpu

This issue tracks progress on the output upcasting issue present in PyTorch/XLA.

- [ ] Main issue: #6403
- [x] Temporary fix for the benchmarking scripts: #6389
- [x]...

xla:gpu

I figured we should tackle this at some point, since it is kind of annoying; until now, though, I had been putting off reporting it. We've reached a...

enhancement