gym-cartpole-swingup
chore(deps-dev): bump torch from 1.9.0 to 1.13.0
Bumps torch from 1.9.0 to 1.13.0.
Release notes
Sourced from torch's releases.
PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available
PyTorch 1.13 Release Notes
- Highlights
- Backwards Incompatible Changes
- New Features
- Improvements
- Performance
- Documentation
- Developers
Highlights
We are excited to announce the release of PyTorch 1.13! This release includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration to CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, which is now included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.
Summary:
The BetterTransformer feature set supports fastpath execution for common Transformer models during inference out of the box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models, and Nested Tensors are now enabled by default.
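As a minimal sketch of how the fastpath is reached (shapes and hyperparameters here are illustrative, not from the release notes): the fused inference kernels engage automatically for a stock `nn.TransformerEncoder` when the model is in eval mode and gradients are disabled.

```python
import torch
import torch.nn as nn

# Build a stock encoder; no model changes are needed for the fastpath.
layer = nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2).eval()

x = torch.randn(2, 5, 8)  # (batch, seq, d_model)
with torch.no_grad():
    # In eval mode with no_grad, 1.13 may dispatch to fused fastpath kernels.
    y = encoder(x)
```

The output shape matches the input, `(2, 5, 8)`; the fastpath changes only which kernels run, not the result contract.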
Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.
Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to `import functorch` and use functorch without needing to install another package.

PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.
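A small sketch of the in-tree functorch import described above, using `vmap` to vectorize a per-example function over a batch dimension (the function and shapes are made up for illustration):

```python
import torch
from functorch import vmap  # ships in-tree with PyTorch from 1.13 on


def dot(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # A per-example function: dot product of two 1-D vectors.
    return (a * b).sum()


xs = torch.randn(8, 3)
ys = torch.randn(8, 3)
# vmap lifts `dot` over the leading batch dimension, yielding shape (8,).
batched = vmap(dot)(xs, ys)
```

This mirrors writing the logic for a single example and letting `vmap` supply the batching, instead of hand-writing batched einsums.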
| Stable | Beta | Prototype |
| --- | --- | --- |
| Better Transformer | Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs | Arm® Compute Library backend support for AWS Graviton |
| CUDA 10.2 and 11.3 CI/CD Deprecation | Extend NNC to support channels last and bf16 | CUDA Sanitizer |
| | Functorch now in PyTorch Core Library | |
| | Beta Support for M1 devices | |

You can check the blogpost that shows the new features here.
Backwards Incompatible changes
Python API
uint8 and all integer dtype masks are no longer allowed in Transformer (#87106)
Prior to 1.13, `key_padding_mask` could be set to uint8 or other integer dtypes in `TransformerEncoder` and `MultiheadAttention`, which might generate unexpected results. In this release, these dtypes are no longer allowed for the mask. Please convert them to `torch.bool` before use.

1.12.1
```python
>>> layer = nn.TransformerEncoderLayer(2, 4, 2)
>>> encoder = nn.TransformerEncoder(layer, 2)
>>> pad_mask = torch.tensor([[1, 1, 0, 0]], dtype=torch.uint8)
>>> inputs = torch.cat([torch.randn(1, 2, 2), torch.zeros(1, 2, 2)], dim=1)
# works before 1.13
>>> outputs = encoder(inputs, src_key_padding_mask=pad_mask)
```
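Under 1.13, the fix is to pass a boolean mask. A minimal sketch (shapes and hyperparameters here are illustrative and use `batch_first=True`, not the exact snippet above; `True` marks padded key positions to ignore):

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=4, nhead=2, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

# Boolean mask: True = padded position, ignored by attention.
# An existing integer mask can be converted with mask.to(torch.bool).
pad_mask = torch.tensor([[False, False, True, True]])
inputs = torch.randn(1, 4, 4)  # (batch, seq, d_model)

outputs = encoder(inputs, src_key_padding_mask=pad_mask)
```

An existing uint8 mask from older code only needs `pad_mask.to(torch.bool)` (or `pad_mask.bool()`) before the call.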
... (truncated)
Changelog
Sourced from torch's changelog.
Releasing PyTorch
- General Overview
- Cutting a release branch preparations
- Cutting release branches
- Drafting RCs (Release Candidates) for PyTorch and domain libraries
- Promoting RCs to Stable
- Additional Steps to prepare for release day
- Patch Releases
- Hardware / Software Support in Binary Build Matrix
- Special Topics
General Overview
Releasing a new version of PyTorch generally entails 3 major steps:
- Cutting a release branch preparations
- Cutting a release branch and making release branch specific changes
- Drafting RCs (Release Candidates), and merging cherry picks
- Promoting RCs to stable and performing release day tasks
Cutting a release branch preparations
The following requirements need to be met prior to the final RC cut:
- Resolve all outstanding issues in the milestones (for example, 1.11.0) before the first RC cut is completed. After the RC cut is completed, the following script should be executed from the builder repo in order to validate the presence of the fixes in the release branch:

```shell
python github_analyze.py --repo-path ~/local/pytorch --remote upstream --branch release/1.11 --milestone-id 26 --missing-in-branch
```
... (truncated)
Commits
- 7c98e70 attempted fix for nvrtc with lovelace (#87611) (#87618)
- 4e1a4b1 fix docs push (#87498) (#87628)
- 341c377 Add General Project Policies (#87385) (#87613)
- fdb18da Fix distributed issue by including distributed files (#87612)
- 8569a44 [MPS] Revamp copy_to_mps_ implementation (#87475)
- 6a8be2c [ONNX] Reland: Update training state logic to support ScriptedModule (#86745)
- f6c42ae Reenable `isinstance` with `torch.distributed.ReduceOp` (#87303) (#87463)
- 51fa4fa Move PadNd from ATen/native to ATen (#87456)
- d3aecbd Delete torch::deploy from pytorch core (#85953) (#87454)
- d253eb2 Avoid calling logging.basicConfig (#86959) (#87455)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)