lightning-flash
build(deps): bump torch from 1.10.2 to 2.0.1 in /requirements
Bumps torch from 1.10.2 to 2.0.1.
Release notes
Sourced from torch's releases.
PyTorch 2.0.1 Release, bug fix release
This release is meant to fix the following issues (regressions / silent correctness):
- Fix _canonical_mask throws warning when bool masks are passed as input to TransformerEncoder/TransformerDecoder (#96009, #96286); see the sketch after this list
- Fix Embedding bag max_norm=-1 causes leaf Variable that requires grad is being used in an in-place operation #95980
- Fix type hint for torch.Tensor.grad_fn, which can be a torch.autograd.graph.Node or None. #96804
- Can’t convert float to int when the input is a scalar np.ndarray. #97696
- Revisit torch._six.string_classes removal #97863
- Fix module backward pre-hooks to actually update gradient #97983
- Fix load_sharded_optimizer_state_dict error on multi node #98063
- Warn once for TypedStorage deprecation #98777
- cuDNN V8 API, Fix incorrect use of emplace in the benchmark cache #97838
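As a concrete illustration of the _canonical_mask item referenced above, here is a minimal sketch of the call pattern that fix concerns: passing a boolean padding mask to nn.TransformerEncoder. The dimensions are arbitrary toy values, not anything prescribed by the release notes.

```python
import torch
import torch.nn as nn

# Toy dimensions chosen only for illustration.
layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

x = torch.randn(2, 5, 16)                           # (batch, seq, d_model)
padding_mask = torch.zeros(2, 5, dtype=torch.bool)  # bool mask: True marks padded positions
padding_mask[:, -1] = True

# On 2.0.0 this pattern could trigger a spurious _canonical_mask warning; 2.0.1 fixes it.
out = encoder(x, src_key_padding_mask=padding_mask)
print(out.shape)  # torch.Size([2, 5, 16])
```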
Torch.compile:
- Add support for Modules with custom getitem method to torch.compile #97932
- Fix improper guards on list variables. #97862
- Fix Sequential nn module with duplicated submodule #98880
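To illustrate the last item above, a minimal sketch (not taken from the PR) of compiling an nn.Sequential in which the same submodule instance appears twice; it assumes torch >= 2.0 and a working torch.compile backend on the machine.

```python
import torch
import torch.nn as nn

shared = nn.Linear(8, 8)
# The same Linear instance is registered twice inside the Sequential.
model = nn.Sequential(shared, nn.ReLU(), shared)

compiled = torch.compile(model)  # requires torch >= 2.0
print(compiled(torch.randn(4, 8)).shape)  # torch.Size([4, 8])
```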
Distributed:
- Fix distributed_c10d's handling of custom backends #95072
- Fix MPI backend not properly initialized #98545
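The distributed fixes above concern backend initialization; the sketch below shows the generic init path with the gloo backend in a single-process world, purely as an API illustration (a real job gets its rendezvous settings from a launcher such as torchrun).

```python
import os
import torch.distributed as dist

# Stand-in rendezvous settings for a one-process world; launchers normally provide these.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="gloo", rank=0, world_size=1)
print(dist.get_backend(), dist.get_world_size())
dist.destroy_process_group()
```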
NN_frontend:
- Update Multi-Head Attention's doc string #97046
- Fix incorrect behavior of is_causal parameter for torch.nn.TransformerEncoderLayer.forward #97214
- Fix error for SDPA on sm86 and sm89 hardware #99105
- Fix nn.MultiheadAttention mask handling #98375
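A minimal sketch of the mask-handling path the nn.MultiheadAttention fix above touches, with arbitrary toy shapes:

```python
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)
q = torch.randn(2, 5, 16)

key_padding_mask = torch.zeros(2, 5, dtype=torch.bool)
key_padding_mask[:, -2:] = True  # ignore the last two keys of every sequence

out, attn = mha(q, q, q, key_padding_mask=key_padding_mask)
print(out.shape, attn.shape)  # torch.Size([2, 5, 16]) torch.Size([2, 5, 5])
```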
DataLoader:
- Fix regression for pin_memory recursion when operating on bytes #97737
- Fix collation logic #97789
- Fix potentially backwards-incompatible change with DataLoader and is_shardable Datapipes #97287
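A minimal sketch of the DataLoader pattern the pin_memory fix above concerns: a dataset whose samples carry a raw-bytes payload alongside tensor-friendly fields, loaded with pin_memory=True (on a machine without an accelerator, pinning is skipped with a warning).

```python
from torch.utils.data import DataLoader, Dataset

class BytesDataset(Dataset):
    """Toy dataset whose samples mix a raw-bytes payload with a numeric label."""
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {"payload": b"raw-bytes", "label": idx}

loader = DataLoader(BytesDataset(), batch_size=2, pin_memory=True)
for batch in loader:
    # bytes fields are collated into lists; numeric labels become a tensor
    print(batch["payload"], batch["label"])
```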
MPS:
- Fix LayerNorm crash when input is in float16 #96208
- Add support for cumsum on int64 input #96733
- Fix issue with setting BatchNorm to non-trainable #98794
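For the int64 cumsum item above, a small sketch that runs on MPS when available and otherwise falls back to CPU, so it stays portable:

```python
import torch

# Use the MPS device when the build and hardware support it; otherwise fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

x = torch.arange(6, dtype=torch.int64, device=device)
print(x.cumsum(dim=0))  # int64 cumsum works on MPS as of 2.0.1
```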
Functorch:
- Fix Segmentation Fault for vmaped function accessing BatchedTensor.data #97237
- Fix index_select support when dim is negative #97916
- Improve docs for autograd.Function support #98020
- Fix Exception thrown when running Migration guide example for jacrev #97746
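A minimal jacrev/vmap sketch using the torch.func namespace (the functorch API bundled with torch 2.x), illustrating the kind of usage the last two items refer to; the function and shapes are arbitrary:

```python
import torch
from torch.func import jacrev, vmap  # functorch API bundled with torch >= 2.0

def f(x):
    return x.sin().sum()

x = torch.randn(3)
print(jacrev(f)(x))  # gradient of the scalar-valued f, shape (3,)

batched = torch.randn(4, 3)
print(vmap(jacrev(f))(batched).shape)  # per-sample gradients, torch.Size([4, 3])
```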
Releng:
- Fix Convolutions for CUDA-11.8 wheel builds #99451
- Fix Import torchaudio + torch.compile crashes on exit #96231
- Linux aarch64 wheels are missing the mkldnn+acl backend support - https://github.com/pytorch/builder/commit/54931c264ed3e7346899f547a272c4329cc8933b
- Linux aarch64 torchtext 0.15.1 wheels are missing for aarch64_linux platform - pytorch/builder#1375
- Enable ROCm 5.4.2 manywheel and python 3.11 builds #99552
- PyTorch cannot be installed at the same time as numpy in a conda env on osx-64 / Python 3.11 #97031
- Illegal instruction (core dumped) on Raspberry Pi 4.0 8gb - pytorch/builder#1370
Torch.optim:
- Fix fused AdamW causes NaN loss #95847
- Fix Fused AdamW has worse loss than Apex and unfused AdamW for fp16/AMP #98620
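The fused-AdamW fixes above concern the fused=True code path, which requires CUDA parameters; below is a hedged sketch that only enables it when a GPU is present.

```python
import torch

model = torch.nn.Linear(8, 1)
use_fused = torch.cuda.is_available()  # the fused implementation needs CUDA tensors
if use_fused:
    model = model.cuda()

opt = torch.optim.AdamW(model.parameters(), lr=1e-3, fused=use_fused)

x = torch.randn(16, 8, device="cuda" if use_fused else "cpu")
loss = model(x).pow(2).mean()
loss.backward()
opt.step()
print(loss.item())
```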
The release tracker should contain all relevant pull requests related to this release as well as links to related issues
... (truncated)
Changelog
Sourced from torch's changelog.
Releasing PyTorch
- Release Compatibility Matrix
- General Overview
- Cutting a release branch preparations
- Cutting release branches
- Drafting RCs (Release Candidates) for PyTorch and domain libraries
- Promoting RCs to Stable
- Additional Steps to prepare for release day
- Patch Releases
- Hardware / Software Support in Binary Build Matrix
- Special Topics
Release Compatibility Matrix
Following is the Release Compatibility Matrix for PyTorch releases:
PyTorch version   Python          Stable CUDA                 Experimental CUDA
2.0               >=3.8, <=3.11   CUDA 11.7, CUDNN 8.5.0.96   CUDA 11.8, CUDNN 8.7.0.84
1.13              >=3.7, <=3.10   CUDA 11.6, CUDNN 8.3.2.44   CUDA 11.7, CUDNN 8.5.0.96
1.12              >=3.7, <=3.10   CUDA 11.3, CUDNN 8.3.2.44   CUDA 11.6, CUDNN 8.3.2.44

General Overview
... (truncated)
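To sanity-check a local environment against the compatibility matrix above, something like the following can be used (the CUDA/cuDNN fields come back as None on CPU-only or MPS builds):

```python
import sys
import torch

print("python:", sys.version.split()[0])
print("torch :", torch.__version__)
print("cuda  :", torch.version.cuda)  # None on CPU-only or MPS builds
print("cudnn :", torch.backends.cudnn.version() if torch.cuda.is_available() else None)
```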
Commits
- e9ebda2 [2.0.1] Disable SDPA FlashAttention backward and mem eff attention on sm86+ f...
- 9e8bd61 Fix tuple iterator issue (#99443)
- e4bdb86 Support Modules with custom getitem method through fallback (#97932) (#98...
- 55b4f95 aot autograd: handle detach() and no_grad() mutations on input (#95980) (#99740)
- 6943c4b Remove redundant found_inf recompute from _step_supports_amp_unscaling pa...
- 91c455e Update MHA doc string (#97046) (#99746)
- c83bbdc Fix NumPy scalar arrays to tensor conversion (#97696) (#99732)
- 661fa0c Remove rocm python 3.11 restriction (#99552)
- 0f49e97 [release 2.0.1] [fix] fix load_sharded_optimizer_state_dict error on multi no...
- b90fd01 Fix flaky Dynamo export tests (#96488) (#99459)
- Additional commits viewable in compare view
You can trigger a rebase of this PR by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Note Automatic rebases have been disabled on this pull request as it has been open for over 30 days.
Codecov Report
Merging #1654 (14a483f) into master (fc6c97a) will decrease coverage by 11%. The diff coverage is n/a.
Additional details and impacted files
@@ Coverage Diff @@
## master #1654 +/- ##
========================================
- Coverage 85% 74% -11%
========================================
Files 291 291
Lines 12856 12856
========================================
- Hits 10985 9518 -1467
- Misses 1871 3338 +1467