                        Bump torch from 1.1.0 to 1.9.0
Bumps torch from 1.1.0 to 1.9.0.
Release notes
Sourced from torch's releases.
PyTorch 1.9 Release, including Torch.Linalg and Mobile Interpreter
PyTorch 1.9 Release Notes
- Highlights
- Backwards Incompatible Changes
- Deprecations
- New Features
- Improvements
- Bug Fixes
- Performance
- Documentation
Highlights
We are excited to announce the release of PyTorch 1.9. The release is composed of more than 3,400 commits since 1.8, made by 398 contributors. Highlights include:
- Major improvements to support scientific computing, including torch.linalg, torch.special, and Complex Autograd (see the brief sketch after this section)
- Major improvements in on-device binary size with Mobile Interpreter
- Native support for elastic fault-tolerant training through the upstreaming of TorchElastic into PyTorch Core
- Major updates to the PyTorch RPC framework to support large scale distributed training with GPU support
- New APIs to optimize performance and packaging for model inference deployment
- Support for Distributed training, GPU utilization and SM efficiency in the PyTorch Profiler
We’d like to thank the community for their support and work on this latest release. We’d especially like to thank Quansight and Microsoft for their contributions.
You can find more details on all the highlighted features in the PyTorch 1.9 Release blogpost.
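To make the scientific-computing highlight above more concrete, here is a minimal, illustrative sketch of the new torch.linalg and torch.special namespaces. It is not taken from the release notes; it simply assumes a PyTorch >= 1.9 install.

```python
import torch

# torch.linalg: NumPy-style linear algebra routines.
A = torch.randn(3, 3)
b = torch.randn(3)
x = torch.linalg.solve(A, b)     # solve A @ x = b
print(torch.linalg.norm(A))      # Frobenius norm by default for matrices

# torch.special: special mathematical functions, mirroring scipy.special.
t = torch.linspace(-2, 2, 5)
print(torch.special.erf(t))      # error function
print(torch.special.expit(t))    # logistic sigmoid
```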
Backwards Incompatible changes
Python API
torch.divide with rounding_mode='floor' now returns infinity when a non-zero number is divided by zero ([#56893](pytorch/pytorch#56893)). This fixes the rounding_mode='floor' behavior to return the same non-finite values as other rounding modes when there is a division by zero. Previously it would always result in a NaN value, but a non-zero number divided by zero should return +/- infinity in IEEE floating point arithmetic. Note this does not affect torch.floor_divide or the floor division operator, which currently use rounding_mode='trunc' (and are also deprecated for that reason).
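As a rough illustration of this behavior change (assuming a PyTorch 1.9 install; the expected output in the comment reflects the note above rather than text copied from the release notes):

```python
import torch

num = torch.tensor([1.0, -1.0, 0.0])
den = torch.zeros(3)

# Under 1.9, dividing a non-zero number by zero with rounding_mode='floor'
# follows IEEE semantics and yields +/- inf; 0/0 is still NaN.
# Older releases returned NaN for all three entries.
print(torch.divide(num, den, rounding_mode='floor'))
# expected (per the note above): tensor([inf, -inf, nan])
```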
... (truncated)
Changelog
Sourced from torch's changelog.
Releasing PyTorch
- General Overview
- Cutting release branches
- Drafting RCs (Release Candidates)
- Promoting RCs to Stable
- Special Topics
General Overview
Releasing a new version of PyTorch generally entails 3 major steps:
- Cutting a release branch and making release branch specific changes
- Drafting RCs (Release Candidates), and merging cherry picks
- Promoting RCs to stable
Cutting release branches
Release branches are typically cut from the branch viable/strict so as to ensure that tests are passing on the release branch.
Release branches should be prefixed like so: release/{MAJOR}.{MINOR}. An example of this would look like: release/1.8.
Please make sure to create a branch that pins the divergent point of the release branch from the main branch, i.e. orig/release/{MAJOR}.{MINOR}.
Making release branch specific changes
These are examples of changes that should be made to release branches so that CI / tooling can function normally on them:
- Update target determinator to use release branch:
- Example: pytorch/pytorch#40712
- Cutting a release branch on
pytorch/xla
- Example: pytorch/pytorch#40721
- Update backwards compatibility tests to use RC binaries instead of nightlies
... (truncated)
Commits
- d69c22d [docs] Add torch.package documentation for beta release (#59886)
- 4ad4f6d hold references to storages during TorchScript serialization (#59672)
- 90e6773 [Release/1.9] Link whole CuDNN for CUDA-11.1 (#59873)
- 43c581a Make detach return an alias even under inference mode (#59633) (#59757)
- bc446f6 Fix test_randperm_device_compatibility for 1 GPU (#59484) (#59502)
- abe996a Move CUDA async warning to suffix (#59467) (#59501)
- 795df76 Do not use gold linker for CUDA builds (#59490) (#59500)
- 3b9cd08 Prefer accurate reciprocal on ARMv8 (#59361) (#59470)
- 226c274 Search for static OpenBLAS compiled with OpenMP (#59428) (#59463)
- ce24cab Fix torch.randperm for CUDA (#59352) (#59452)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
- @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
- @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
- @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
- @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme
Additionally, you can set the following in your Dependabot dashboard:
- Update frequency (including time of day and day of week)
- Pull request limits (per update run and/or open at any time)
- Out-of-range updates (receive only lockfile updates, if desired)
- Security updates (receive only security updates, if desired)