torchTS
Bump torch from 1.11.0 to 2.3.1
Bumps torch from 1.11.0 to 2.3.1.
Release notes
Sourced from torch's releases.
PyTorch 2.3.1 Release, bug fix release
This release is meant to fix the following issues (regressions / silent correctness):
Torch.compile:
- Remove runtime dependency on JAX/XLA when importing `torch._dynamo` (pytorch/pytorch#124634)
- Hide the `Plan failed with a cudnnException` warning (pytorch/pytorch#125790)
- Fix CUDA memory leak (pytorch/pytorch#124238) (pytorch/pytorch#120756)
Distributed:
- Fix the `format_utils` executable, which was causing it to run as a no-op (pytorch/pytorch#123407)
- Fix regression with `device_mesh` in 2.3.0 during initialization causing memory spikes (pytorch/pytorch#124780)
- Fix crash of FSDP + DTensor with `ShardingStrategy.SHARD_GRAD_OP` (pytorch/pytorch#123617)
- Fix failure with distributed checkpointing + FSDP if at least one forward/backward pass has not been run (pytorch/pytorch#121544) (pytorch/pytorch#127069)
- Fix error with distributed checkpointing + FSDP with `use_orig_params = False` and activation checkpointing (pytorch/pytorch#124698) (pytorch/pytorch#126935)
- Fix `set_model_state_dict` errors on compiled modules with non-persistent buffers with distributed checkpointing (pytorch/pytorch#125336) (pytorch/pytorch#125337)
MPS:
- Fix data corruption when copying large (>4GiB) tensors (pytorch/pytorch#124635)
- Fix `Tensor.abs()` for complex (pytorch/pytorch#125662)
Packaging:
- Fix UTF-8 encoding of Windows `.pyi` files (pytorch/pytorch#124932)
- Fix `import torch` failure when the wheel is installed for a single user on Windows (pytorch/pytorch#125684)
- Fix compatibility with torchdata 0.7.1 (pytorch/pytorch#122616)
- Fix aarch64 docker publishing to https://ghcr.io (pytorch/pytorch#125617)
- Fix performance regression on aarch64 linux (pytorch/builder#1803)
Other:
- Fix DeepSpeed transformer extension build on ROCm (pytorch/pytorch#121030)
- Fix kernel crash on `tensor.dtype.to_complex()` after ~100 calls in ipython kernel (pytorch/pytorch#125154)
Release tracker pytorch/pytorch#125425 contains all relevant pull requests related to this release as well as links to related issues.
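Purely as an illustration (not part of the quoted release notes), here is a minimal sketch exercising two of the APIs named in the fix list above. It runs on CPU so it works anywhere, while the actual fixes target the MPS backend and repeated calls in an ipython kernel respectively.

```python
# Illustrative sketch only; not taken from the release notes.
import torch

# Tensor.abs() on a complex tensor (the MPS fix above concerns this op).
z = torch.tensor([3 + 4j, 1 - 1j])
print(z.abs())                     # tensor([5.0000, 1.4142])

# dtype.to_complex() (the crash fix above concerns repeated calls in ipython).
print(torch.float32.to_complex())  # torch.complex64
```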
PyTorch 2.3: User-Defined Triton Kernels in torch.compile, Tensor Parallelism in Distributed
PyTorch 2.3 Release notes
- Highlights
- Backwards Incompatible Changes
- Deprecations
- New Features
- Improvements
- Bug fixes
- Performance
- Documentation
Highlights
We are excited to announce the release of PyTorch® 2.3! PyTorch 2.3 offers support for user-defined Triton kernels in torch.compile, allowing users to migrate their own Triton kernels from eager without experiencing performance regressions or graph breaks. In addition, Tensor Parallelism improves the experience for training Large Language Models using native PyTorch functions, which has been validated on training runs for 100B parameter models.
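To make the highlight concrete, below is a minimal sketch (not taken from the release notes) of calling a user-defined Triton kernel from a torch.compile'd function. It assumes torch>=2.3 with triton installed and a CUDA GPU; the names `add_kernel`, `triton_add`, and `fused` are invented for illustration.

```python
# Minimal sketch of the PyTorch 2.3 highlight: a user-defined Triton kernel
# called from a torch.compile'd function. Assumes a CUDA GPU and triton.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the inputs.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def triton_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

@torch.compile
def fused(x, y):
    # In PyTorch 2.3+, torch.compile can trace through the user-defined
    # Triton kernel call instead of falling back to eager with a graph break.
    return triton_add(x, y) * 2.0

if torch.cuda.is_available():
    a = torch.randn(4096, device="cuda")
    b = torch.randn(4096, device="cuda")
    torch.testing.assert_close(fused(a, b), (a + b) * 2.0)
```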
This release is composed of 3393 commits and 426 contributors since PyTorch 2.2. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.3. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.
... (truncated)
Changelog
Sourced from torch's changelog.
Releasing PyTorch
- Release Compatibility Matrix
- Release Cadence
- General Overview
- Cutting a release branch preparations
- Cutting release branches
- Running Launch Execution team Core XFN sync
- Drafting RCs (Release Candidates) for PyTorch and domain libraries
- Preparing and Creating Final Release candidate
- Promoting RCs to Stable
- Additional Steps to prepare for release day
- Patch Releases
- Hardware / Software Support in Binary Build Matrix
- Submitting Tutorials
- Special Topics
Release Compatibility Matrix
Following is the Release Compatibility Matrix for PyTorch releases:
... (truncated)
Commits
- `63d5e92` [EZ] Pin scipy to 1.12 for Py-3.12 (#127322)
- `91bdec3` Update hf_BirdBird periodic-dynamo-benchmarks results (#127312)
- `d44533f` Put back "[Release only] Release 2.3 start using triton package from pypi"" (...
- `bd1040c` [DSD] Fix to remove non_persistent buffer in distributed state dict (#125337)...
- `81b8854` [DSD] Add a test to verify FSDP lazy initialization case (#127069) (#127130)
- `e63004b` [DCP][state_dict] Remove the check of FSDP has root (#121544) (#126557)
- `00804a7` [DSD] Correctly handle _extra_state (#125336) (#126567)
- `cd033a1` [Cherry-pick][DCP][AC] Add test for apply AC with FSDP1 (#126935) (#126992)
- `19058a6` Remove activation checkpointing tag to get correct FQNs (#124698) (#126559)
- `30650e0` [FSDP1] fix _same_storage check for DTensor (#123617) (#126957)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)