Cody Yu
Thanks for the RFC. Will review next week.
> I think it is a good idea to invite people working on RISC-V support for TVM to review/discuss, since the RISC-V vector extension is similar to ARM SVE...
Thanks for the RFC. Although I wasn't involved in the actual Relax development, I've been attending the weekly open design review meeting for a while, and I'm glad that I could...
In addition to the use cases and experience I've mentioned previously, I want to further highlight that symbolic shape support has become even more important in recent months for us at...
Maybe it's due to a different build process? I didn't build PyTorch from source. Instead, I pip-installed nightly PyTorch and manually installed the corresponding libtorch. This is sufficient to build torch_xla...
The following usage with `xla_device()` results in the same issue:
```python
>>> import torch_xla
>>> import torch_xla.core.xla_model as xm
>>> xm.xla_device()
Traceback (most recent call last):
  File "<stdin>", line 1,...
```
btw, this is the output from py-spy when it was hanging in HuggingFace:
```
Thread 118054 (idle): "MainThread"
    <lambda> (torch_xla/core/xla_model.py:20)
    value (torch_xla/utils/utils.py:32)
    get_xla_supported_devices (torch_xla/core/xla_model.py:138)
    xla_device (torch_xla/core/xla_model.py:244)
    is_torch_tpu_available (transformers/utils/import_utils.py:409)
    _setup_devices (transformers/training_args.py:1328)
    wrapper...
```
I agree with you, but unfortunately this is not how HuggingFace uses... Specifically, they fall back to other backends when `is_torch_tpu_available()` (which calls `xla_device()`) returns False, and this function may...
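For context, here is a minimal sketch of the dispatch pattern described above, based on the py-spy trace earlier in this thread; the fallback logic is my assumption for illustration, not HuggingFace's actual code:
```python
from transformers.utils.import_utils import is_torch_tpu_available  # internally calls xla_device()

def pick_backend():
    # If xla_device() hangs instead of raising, this check never returns,
    # so the caller can never reach the non-TPU fallback branch.
    if is_torch_tpu_available():
        return "tpu"
    return "cuda"  # assumed fallback for illustration
```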
I'm bumping this issue since I've been facing the same problem recently. My proposed approach is to build an interface that lets users specify their own convergence criteria. Just like the registration...
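A minimal sketch of what such a registration-style interface could look like; all names here (`register_convergence_criterion`, `rel_loss_plateau`) are hypothetical, just to illustrate the idea:
```python
from typing import Callable, Dict, List

# Hypothetical registry mapping a name to a user-defined convergence predicate.
_CONVERGENCE_CRITERIA: Dict[str, Callable[[List[float]], bool]] = {}

def register_convergence_criterion(name: str):
    """Register a function that takes the loss history and returns True once converged."""
    def decorator(fn: Callable[[List[float]], bool]):
        _CONVERGENCE_CRITERIA[name] = fn
        return fn
    return decorator

@register_convergence_criterion("rel_loss_plateau")
def rel_loss_plateau(loss_history: List[float], tol: float = 1e-4, window: int = 5) -> bool:
    # Converged once the relative improvement over the last `window` steps drops below `tol`.
    if len(loss_history) < window + 1:
        return False
    prev, cur = loss_history[-window - 1], loss_history[-1]
    return abs(prev - cur) / max(abs(prev), 1e-12) < tol

# The tuning loop would then consult the user-selected criterion each step, e.g.:
# if _CONVERGENCE_CRITERIA[args.criterion](loss_history): break
```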
One workaround I can think of is the following, although it's not clean:
```
old_output_str = self.tokenizer.decode(self.output_ids)
temp = self.tokenizer.decode([1, self.output_ids[0]], clean_up_tokenization_spaces=False)
dummy = self.tokenizer.decode([1], clean_up_tokenization_spaces=False)
if temp[len(dummy):].startswith(" "):...
```
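For reference, a hedged sketch of how that truncated snippet plausibly continues, wrapped in a hypothetical helper (`fix_leading_space`); the branch body is my guess at the intent (re-attaching the leading space that `decode` drops from the first token), not the author's original code:
```python
def fix_leading_space(tokenizer, output_ids):
    # Assumes `tokenizer` is a HuggingFace tokenizer and token id 1 is only a
    # decoding anchor (e.g. BOS) used to expose the first token's leading space.
    old_output_str = tokenizer.decode(output_ids)
    temp = tokenizer.decode([1, output_ids[0]], clean_up_tokenization_spaces=False)
    dummy = tokenizer.decode([1], clean_up_tokenization_spaces=False)
    if temp[len(dummy):].startswith(" "):
        # Assumed intent: re-attach the leading space so incremental decoding
        # stays consistent with decoding the full sequence at once.
        old_output_str = " " + old_output_str
    return old_output_str
```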