maml
Bump torch from 1.12.0 to 1.12.1
Bumps torch from 1.12.0 to 1.12.1.
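After merging, a quick sanity check can confirm the environment actually picked up the new pin (a minimal sketch; it only assumes torch is installed per this PR):

```python
# Minimal post-upgrade sanity check (assumes torch installed per this PR's pin).
import torch

assert torch.__version__.startswith("1.12.1"), torch.__version__
print("torch", torch.__version__)
```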
Release notes
Sourced from torch's releases.
PyTorch 1.12.1 Release, small bug fix release
This release is meant to fix the following issues (regressions / silent correctness):
Optim
- Remove overly restrictive assert in adam pytorch/pytorch#80222
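For context, the pattern that tripped this assert in 1.12.0 was roughly a checkpoint resume on GPU: load_state_dict casts the Adam step counters to the parameters' device, after which the capturable=False assert fired. A minimal sketch of that pattern (not the exact reproducer from the issue):

```python
# Sketch of the checkpoint-resume pattern behind the 1.12.0 Adam assert.
import torch

if torch.cuda.is_available():  # the assert only fired with CUDA optimizer state
    model = torch.nn.Linear(4, 4).cuda()
    opt = torch.optim.Adam(model.parameters())
    model(torch.randn(2, 4, device="cuda")).sum().backward()
    opt.step()
    # Round-tripping the state dict casts the step tensors to the param device...
    opt.load_state_dict(opt.state_dict())
    opt.step()  # ...which tripped the capturable=False assert before 1.12.1
```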
Autograd
- Convolution forward over reverse internal asserts in specific case pytorch/pytorch#81111
- 25% Performance regression from v0.1.1 to 0.2.0 when calculating hessian pytorch/pytorch#82504
Distributed
- Fix distributed store to use add for the counter of DL shared seed pytorch/pytorch#80348
- Raise proper timeout when sharing the distributed shared seed pytorch/pytorch#81666
NN
- Allow register float16 weight_norm on cpu and speed up test pytorch/pytorch#80600
- Fix weight norm backward bug on CPU when OMP_NUM_THREADS <= 2 pytorch/pytorch#80930
- Weight_norm is not working with float16 pytorch/pytorch#80599
- New release breaks torch.nn.weight_norm backwards pass and breaks all Wav2Vec2 implementations pytorch/pytorch#80569
- Disable src mask for transformer and multiheadattention fastpath pytorch/pytorch#81277
- Make nn.stateless correctly reset parameters if the forward pass fails pytorch/pytorch#81262
- torchvision.transforms.functional.rgb_to_grayscale() + torch.nn.Conv2d() don't work on 1080 GPU pytorch/pytorch#81106
- Transformer and CPU path with src_mask raises error with torch 1.12 pytorch/pytorch#81129
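Several of the entries above concern nn.utils.weight_norm; a minimal sketch of the affected pattern (float32 shown so it runs anywhere; the fp16 and OMP_NUM_THREADS <= 2 variants are the ones that broke):

```python
# weight_norm reparameterizes weight as g * v / ||v||; 1.12.0 broke its
# backward pass on CPU (OMP_NUM_THREADS <= 2) and fp16 registration.
import torch
import torch.nn as nn

layer = nn.utils.weight_norm(nn.Linear(8, 4))
out = layer(torch.randn(2, 8))
out.sum().backward()  # backward through the reparameterization, fixed in 1.12.1
```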
Data Loader
- Locking lower ranks seed recipients pytorch/pytorch#81071
CUDA
- os.environ["CUDA_VISIBLE_DEVICES"] has no effect pytorch/pytorch#80876
- share_memory() on CUDA tensors no longer no-ops and instead crashes pytorch/pytorch#80733
- [Prims] Unbreak CUDA lazy init pytorch/pytorch#80899
- PyTorch 1.12 cu113 wheels cudnn discoverability issue pytorch/pytorch#80637
- Remove overly restrictive checks for cudagraph pytorch/pytorch#80881
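The CUDA_VISIBLE_DEVICES entry above is about the device mask being ignored; the mask must be set before torch initializes CUDA, as in this sketch:

```python
# CUDA_VISIBLE_DEVICES must be set before torch initializes CUDA; in 1.12.0
# the mask was ignored in some configurations (pytorch/pytorch#80876).
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # hypothetical single-GPU mask

import torch
if torch.cuda.is_available():
    print(torch.cuda.device_count())  # should report 1 with the mask applied
```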
ONNX
- ONNX cherry picks pytorch/pytorch#82435
MPS
- MPS cherry picks pytorch/pytorch#80898
Other
- Don't error if _warned_capturable_if_run_uncaptured not set pytorch/pytorch#80345
- Initializing libiomp5.dylib, but found libomp.dylib already initialized. pytorch/pytorch#78490
- Assertion error - _dl_shared_seed_recv_cnt - pt 1.12 - multi node pytorch/pytorch#80845
- Add 3.10 stdlib to torch.package pytorch/pytorch#81261
- CPU-only c++ extension libraries (functorch, torchtext) built against PyTorch wheels are not fully compatible with PyTorch wheels pytorch/pytorch#80489
Commits
- 664058f Pin windows numpy (#82652) (#82686)
- efc2d08 Revert #75195 (#82504) (#82662)
- 9a9dceb ONNX cherry picks for 1.12.1 (#82435)
- 617c4fe Fix invalid read in masked softmax (#82272) (#82272) (#82405)
- f469bc1 [ci] Release only change: bump macos worker instance type (#82113)
- 66f6e79 Fix deserialization of TransformerEncoderLayer (#81832) (#81832) (#82094)
- 35eb488 [CI] Disable ios-12-5-1-x86-64 (#81612) (#81612) (#82096)
- e65e4ac 1.12.1/bt fix (#81952)
- e8534b9 MPS cherry picks for 1.12.1 (#81976)
- 03b82bd Disable XLA builds (#80099) (#80099) (#81977)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Dependabot tried to add @chc273 and @shyuep as reviewers to this PR, but received the following error from GitHub:
POST https://api.github.com/repos/materialsvirtuallab/maml/pulls/454/requested_reviewers: 422 - Reviews may only be requested from collaborators. One or more of the users or teams you specified is not a collaborator of the materialsvirtuallab/maml repository. // See: https://docs.github.com/rest/reference/pulls#request-reviewers-for-a-pull-request