conditional-flow-matching
Update torch requirement from <2.0.0,>=1.11.0 to >=1.11.0,<3.0.0
Updates the requirements on torch to permit the latest version.
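The effect of widening the specifier from `>=1.11.0,<2.0.0` to `>=1.11.0,<3.0.0` can be sketched with a small pure-Python range check (a sketch only, not code from this PR; the helpers `parse` and `in_range` are hypothetical, and real tools should use a PEP 440-aware library such as `packaging`):

```python
# Sketch: comparing a torch version against the old and new specifier
# ranges via tuple comparison on "major.minor.patch" components.

def parse(version):
    """Turn '2.1.0' into a comparable tuple (2, 1, 0)."""
    return tuple(int(p) for p in version.split("."))

def in_range(version, lower, upper_exclusive):
    """True if lower <= version < upper_exclusive."""
    return parse(lower) <= parse(version) < parse(upper_exclusive)

# Old requirement >=1.11.0,<2.0.0 rejects torch 2.1.0:
print(in_range("2.1.0", "1.11.0", "2.0.0"))  # False
# New requirement >=1.11.0,<3.0.0 admits it:
print(in_range("2.1.0", "1.11.0", "3.0.0"))  # True
```

The upper bound `<3.0.0` still excludes a hypothetical future major release, matching the usual semantic-versioning convention of pinning below the next major version.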
Release notes
Sourced from torch's releases.
PyTorch 2.1: automatic dynamic shape compilation, distributed checkpointing
PyTorch 2.1 Release Notes
- Highlights
- Backwards Incompatible Changes
- Deprecations
- New Features
- Improvements
- Bug fixes
- Performance
- Documentation
- Developers
- Security
Highlights
We are excited to announce the release of PyTorch® 2.1! PyTorch 2.1 offers automatic dynamic shape support in torch.compile, torch.distributed.checkpoint for saving/loading distributed training jobs on multiple ranks in parallel, and torch.compile support for the NumPy API.
In addition, this release offers numerous performance improvements (e.g. CPU inductor improvements, AVX512 support, scaled-dot-product-attention support) as well as a prototype release of torch.export, a sound full-graph capture mechanism, and torch.export-based quantization.
Along with 2.1, we are also releasing a series of updates to the PyTorch domain libraries. More details can be found in the library updates blog.
This release is composed of 6,682 commits and 784 contributors since 2.0. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.1. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.
Summary:
- torch.compile now includes automatic support for detecting and minimizing recompilations due to tensor shape changes using automatic dynamic shapes.
- torch.distributed.checkpoint enables saving and loading models from multiple ranks in parallel, as well as resharding due to changes in cluster topology.
- torch.compile can now compile NumPy operations via translating them into PyTorch-equivalent operations.
- torch.compile now includes improved support for Python 3.11.
- New CPU performance features include inductor improvements (e.g. bfloat16 support and dynamic shapes), AVX512 kernel support, and scaled-dot-product-attention kernels.
- torch.export, a sound full-graph capture mechanism, is introduced as a prototype feature, as well as torch.export-based quantization.
- torch.sparse now includes prototype support for semi-structured (2:4) sparsity on NVIDIA® GPUs.
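The 2:4 ("semi-structured") pattern mentioned above means that in every contiguous group of four weights, at most two are non-zero. A minimal pure-Python illustration of pruning to that pattern (an illustration of the concept only, not PyTorch's torch.sparse API, which targets supported NVIDIA GPUs; the function `prune_2_4` is hypothetical):

```python
def prune_2_4(values):
    """Zero out the two smallest-magnitude entries in each group of four,
    producing a 2:4 semi-structured sparsity pattern."""
    assert len(values) % 4 == 0, "length must be a multiple of 4"
    out = []
    for i in range(0, len(values), 4):
        group = values[i:i + 4]
        # Indices of the two largest-magnitude entries in this group.
        keep = sorted(range(4), key=lambda j: abs(group[j]), reverse=True)[:2]
        out.extend(v if j in keep else 0 for j, v in enumerate(group))
    return out

print(prune_2_4([0.9, -0.1, 0.4, 0.05, -2.0, 0.3, 0.0, 1.5]))
# [0.9, 0, 0.4, 0, -2.0, 0, 0, 1.5]
```

Because exactly half the entries in each group are guaranteed to be zero at fixed positions, the hardware can store and multiply the compressed form efficiently.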
... (truncated)
Changelog
Sourced from torch's changelog.
Releasing PyTorch
- Release Compatibility Matrix
- Release Cadence
- General Overview
- Cutting a release branch preparations
- Cutting release branches
- Drafting RCs (Release Candidates) for PyTorch and domain libraries
- Promoting RCs to Stable
- Additional Steps to prepare for release day
- Patch Releases
- Hardware / Software Support in Binary Build Matrix
- Submitting Tutorials
- Special Topics
Release Compatibility Matrix
Following is the Release Compatibility Matrix for PyTorch releases:
PyTorch version | Python         | Stable CUDA                | Experimental CUDA
2.1             | >=3.8, <=3.11  | CUDA 11.8, CUDNN 8.7.0.84  | CUDA 12.1, CUDNN 8.9.2.26
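For scripting purposes, the matrix row above can be expressed as a simple lookup. A sketch (the dict covers only the 2.1 row shown here, and the names `SUPPORTED_PYTHON` and `python_supported` are hypothetical):

```python
import sys

# Supported Python range per PyTorch release, from the compatibility
# matrix above (only the 2.1 row; extend as needed).
SUPPORTED_PYTHON = {"2.1": ((3, 8), (3, 11))}

def python_supported(torch_version, py=sys.version_info[:2]):
    """True if the given Python (major, minor) falls inside the
    supported range for the given PyTorch release."""
    lo, hi = SUPPORTED_PYTHON[torch_version]
    return lo <= py <= hi

print(python_supported("2.1", (3, 10)))  # True
print(python_supported("2.1", (3, 12)))  # False
```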
... (truncated)
Commits
- 7bcf7da Add tensorboard to pip requirements (#109349) (#109823)
- 1841d54 [CI] Add torch.compile works without numpy test (#109624) (#109818)
- fca4233 Fix the parameter error in test_device_mesh.py (#108758) (#109826)
- 539a971 [Release-2.1] Add finfo properties for float8 dtypes (#109808)
- 9287a0c [Release/2.1][JIT] Fix typed enum handling in 3.11 (#109807)
- c464075 [release only] Docker build - Setup release specific variables (#109809)
- 1b4161c [Release/2.1] [Docs] Fix compiler.list_backends invocation (#109800)
- 2822053 [Release/2.1] [Docs] Fix typo in torch.unflatten (#109801)
- da9639c Remove torchtext from Build Official Docker images (#109799) (#109803)
- e534243 Add docs for torch.compile(numpy) (#109789)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)