Wei Wang

Results 61 comments of Wei Wang

Judging from the discussion, the issue was fixed on the Hugging Face side, thanks!

Closing, as a workaround (WAR) has been suggested.

We detected this issue while trying to integrate v2.8.1, which fixes https://github.com/Dao-AILab/flash-attention/issues/1743

But v2.7.4.post1 has the issue https://github.com/Dao-AILab/flash-attention/issues/1743, which v2.8.1 fixes.

FYI, we noticed by chance that CUDA 12.4.0 could potentially work with Visual Studio 2022 17.10. For example, https://github.com/pytorch/vision/actions/runs/9502323845/job/26190040599#step:11:154 uses the Visual Studio 2022 Developer Command Prompt v17.10.1, and the CUDA version...

From my point of view, our documentation below was a little conservative: "CUDA Requirements: MSVC Version 193x, Visual Studio 2022 17.x". The run at https://github.com/pytorch/vision/actions/runs/9502323845/job/26190040599#step:11:154 is proof that CUDA 12.4.0 indeed...

@pytorchbot merge -f "xpu failures not related"

Take python3.10 as an example:
1) PyTorch binary uploaded to S3 (download.pytorch.org):
   2025-09-04T13:04:29.9106614Z + aws s3 cp --no-progress --acl public-read /__w/_temp/artifacts/torch-2.9.0.dev20250904+cu130-cp310-cp310-manylinux_2_28_aarch64.whl s3://pytorch/whl/nightly/cu130/ --metadata checksum-sha256=e772a9db42f379d8961d7beca7bd169b8b6bdc8e3c71b32226932596b24b1e8c
   2025-09-04T13:04:37.6558252Z upload: ../../_temp/artifacts/torch-2.9.0.dev20250904+cu130-cp310-cp310-manylinux_2_28_aarch64.whl to s3://pytorch/whl/nightly/cu130/torch-2.9.0.dev20250904+cu130-cp310-cp310-manylinux_2_28_aarch64.whl
2) Vision...
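The upload step above records a `checksum-sha256` value as S3 object metadata. A minimal sketch of how a downloaded wheel could be verified against such a recorded checksum is below; the file name and expected value here are stand-ins created locally so the snippet is self-contained, not the real wheel or metadata.

```shell
# Hypothetical verification sketch: compare a file's SHA-256 against an
# expected value, as one would with the checksum-sha256 S3 metadata.
# "demo.whl" is a stand-in file, not the real PyTorch wheel.
WHEEL=demo.whl
printf 'payload' > "$WHEEL"

# In practice EXPECTED would come from the S3 object's metadata;
# here we compute it from the stand-in file so the sketch runs as-is.
EXPECTED=$(sha256sum "$WHEEL" | awk '{print $1}')

ACTUAL=$(sha256sum "$WHEEL" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "MISMATCH"
fi
```

Comparing the hash locally catches a truncated or corrupted download before the wheel is installed.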

@atalman Could you please review again? Is it good to merge?