schung-amd
Thanks for the clarifying follow-up @doru1004, you're correct. The issue lies with tidx as you stated, and modifying the logic so that tidx is not involved in the bounds (i.e....
Hi @Lookforworld, thanks for the detailed report. I believe the error messages you're seeing in `hipcc --version` and `amd-smi` are normal for WSL2 configurations and unrelated to your issue. I...
@Lookforworld I noticed you're doing all of this as the root user inside WSL. While the guide doesn't explicitly forbid you from being root, it's intended to be followed as...
@fabiano-amaral Please reinstall using the instructions in https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/wsl/install-radeon.html. Afterward, `python3 -m torch.utils.collect_env` should show `PyTorch version: 2.1.2+rocm6.1.3` (you have `2.5.0.dev20240802+rocm6.1` here), and a special version of Adrenalin 24.6.1 with WSL...
@fabiano-amaral Correct, after manually installing 24.6.1 with WSL support from the link in the guide, you should not update through the software.
To clarify, were you able to follow the whole guide as a non-root user, and if so, are you still experiencing the issue with `torch.cuda.is_available()` afterward?
Interesting, glad you were able to track the issue down on your end @Lookforworld. Does `torch.cuda.is_available()` now return `True` as expected, and do you get meaningful output from `python3 -m...
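For anyone following along, a minimal sketch of this check from inside WSL might look like the following; the helper name `rocm_torch_status` is just for illustration, and the version strings will differ per install:

```python
import importlib.util

def rocm_torch_status():
    """Return a short status string describing whether PyTorch can see the GPU."""
    # Guard the import so the check degrades gracefully on a broken install.
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    # torch.cuda.is_available() covers ROCm builds as well; it reports True
    # when the HIP runtime can see the GPU from inside WSL.
    if torch.cuda.is_available():
        return f"ok: {torch.__version__}"
    return f"no gpu: {torch.__version__}"

print(rocm_torch_status())
```

If this reports `no gpu`, running `python3 -m torch.utils.collect_env` is the next step, since it also shows which ROCm build of PyTorch is actually installed.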
Hi @Lookforworld, are torch and llama still hanging for you? If so, can you provide some more system information? We're looking for: - Motherboard model, BIOS version/date and SMBIOS version...
Thanks for the info! Can you also provide the output of `wsl --version` from the Windows side, and `python3 -m torch.utils.collect_env` inside WSL (if that's working for you now, I...
Hi @nazar-pc, `hipInfo` also does not report cooperative launch support on Windows with a 7900XTX, so it seems the issue does lie with HIP SDK as you suggest. I'll check...