alibi-detect
Update torch requirement from <1.14.0,>=1.7.0 to >=1.7.0,<3.0.0
Updates the requirements on torch to permit the latest version.
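In requirements terms, the new specifier from the PR title accepts any torch release from 1.7.0 up to, but not including, 3.0.0:

```
torch>=1.7.0,<3.0.0
```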
Release notes
Sourced from torch's releases.
PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever
PyTorch 2.0 Release notes
- Highlights
- Backwards Incompatible Changes
- Deprecations
- New Features
- Improvements
- Bug fixes
- Performance
- Documentation
Highlights
We are excited to announce the release of PyTorch® 2.0 (release note) which we highlighted during the PyTorch Conference on 12/2/22! PyTorch 2.0 offers the same eager-mode development and user experience, while fundamentally changing and supercharging how PyTorch operates at compiler level under the hood with faster performance and support for Dynamic Shapes and Distributed.
This next-generation release includes a Stable version of Accelerated Transformers (formerly called Better Transformers); Beta includes torch.compile as the main API for PyTorch 2.0, the scaled_dot_product_attention function as part of torch.nn.functional, the MPS backend, functorch APIs in the torch.func module; and other Beta/Prototype improvements across various inferences, performance and training optimization features on GPUs and CPUs. For a comprehensive introduction and technical overview of torch.compile, please visit the 2.0 Get Started page.
Along with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. More details can be found in this library blog.
This release is composed of over 4,541 commits and 428 contributors since 1.13.1. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.0 and the overall 2-series this year.
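As a minimal sketch of the torch.compile workflow introduced here (assuming torch >= 2.0 is installed; backend="eager" is chosen only so the example runs without a C++/Triton toolchain, not because it is the default):

```python
import torch

# torch.compile wraps an ordinary nn.Module and returns a compiled callable
# with the same call signature; backend="eager" skips code generation so this
# sketch does not require a compiler toolchain.
model = torch.nn.Linear(16, 4)
compiled_model = torch.compile(model, backend="eager")

x = torch.randn(8, 16)
out = compiled_model(x)  # same semantics as model(x)
```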
Summary:
- torch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.
- As an underpinning technology of torch.compile, TorchInductor with Nvidia and AMD GPUs will rely on OpenAI Triton deep learning compiler to generate performant code and hide low level hardware details. OpenAI Triton-generated kernels achieve performance that's on par with hand-written kernels and specialized cuda libraries such as cublas.
- Accelerated Transformers introduce high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SPDA). The API is integrated with torch.compile() and model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator.
- Metal Performance Shaders (MPS) backend provides GPU accelerated PyTorch training on Mac platforms with added support for Top 60 most used ops, bringing coverage to over 300 operators.
- Amazon AWS optimized PyTorch CPU inference on AWS Graviton3-based C7g instances. PyTorch 2.0 improves inference performance on Graviton compared to previous releases, including improvements for ResNet-50 and BERT.
- New prototype features and technologies across TensorParallel, DTensor, 2D parallel, TorchDynamo, AOTAutograd, PrimTorch and TorchInductor.
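The direct scaled_dot_product_attention call mentioned above can be sketched as follows (assuming torch >= 2.0; the shapes follow the (batch, heads, seq_len, head_dim) convention used by the fused kernels):

```python
import torch
import torch.nn.functional as F

# Query/key/value tensors in (batch, heads, seq_len, head_dim) layout.
q = torch.randn(2, 4, 10, 8)
k = torch.randn(2, 4, 10, 8)
v = torch.randn(2, 4, 10, 8)

# Dispatches to a fused kernel when one is available for the input
# shapes/dtypes/device, otherwise falls back to a math implementation.
out = F.scaled_dot_product_attention(q, k, v)
```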
... (truncated)
Changelog
Sourced from torch's changelog.
Releasing PyTorch
- Release Compatibility Matrix
- General Overview
- Cutting a release branch preparations
- Cutting release branches
- Drafting RCs (Release Candidates) for PyTorch and domain libraries
- Promoting RCs to Stable
- Additional Steps to prepare for release day
- Patch Releases
- Hardware / Software Support in Binary Build Matrix
- Special Topics
Release Compatibility Matrix
Following is the Release Compatibility Matrix for PyTorch releases:
PyTorch version | Python | Stable CUDA | Experimental CUDA
---|---|---|---
2.0 | >=3.8, <=3.11 | CUDA 11.7, CUDNN 8.5.0.96 | CUDA 11.8, CUDNN 8.7.0.84
1.13 | >=3.7, <=3.10 | CUDA 11.6, CUDNN 8.3.2.44 | CUDA 11.7, CUDNN 8.5.0.96
1.12 | >=3.7, <=3.10 | CUDA 11.3, CUDNN 8.3.2.44 | CUDA 11.6, CUDNN 8.3.2.44

General Overview
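A quick, illustrative check of the local interpreter against the PyTorch 2.0 row of the matrix (the >=3.8, <=3.11 Python bound; the helper name is ours, not part of any PyTorch API):

```python
import sys

# PyTorch 2.0 binaries support Python >=3.8, <=3.11 per the matrix above.
TORCH_2_0_PYTHON = ((3, 8), (3, 11))

def python_supported(version=sys.version_info[:2], bounds=TORCH_2_0_PYTHON):
    """Return True if (major, minor) falls inside the supported range."""
    low, high = bounds
    return low <= version <= high

print(python_supported((3, 10)))  # prints True
```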
Releasing a new version of PyTorch generally entails 3 major steps:
... (truncated)
Commits
- c263bd4 [inductor] use triu ref instead of lowering (#96040) (#96462)
- c9913cf Add jinja2 as mandatory dependency (#95691) (#96450)
- 2f7d8bb Fix expired deprecation of comparison dtype for NumPy 1.24+ (#91517) (#96452)
- ca0cdf5 dl_open_guard should restore flag even after exception (#96231) (#96457)
- 9cfa076 [Release/2.0] Use Triton from PYPI (#96010)
- 8e05e41 [Release/2.0] Use builder release branch for tests
- d8ffc60 Remove mention of dynamo.optimize() in docs (#95802) (#96007)
- 1483723 [MPS] Disallow reshape in slice (#95905) (#95978)
- c4572aa [MPS] Add fixes for div with floor (#95869)
- 82b078b [MPS] Fix views with 3 or more sliced dimensions (#95762) (#95871)
- Additional commits viewable in compare view
You can trigger a rebase of this PR by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Codecov Report
Merging #758 (e267fed) into master (3992180) will decrease coverage by 0.34%. The diff coverage is n/a.
:exclamation: Current head e267fed differs from pull request most recent head 5a8e2ed. Consider uploading reports for the commit 5a8e2ed to get more accurate results
Additional details and impacted files
@@ Coverage Diff @@
## master #758 +/- ##
==========================================
- Coverage 81.03% 80.69% -0.34%
==========================================
Files 146 144 -2
Lines 9721 9577 -144
==========================================
- Hits 7877 7728 -149
- Misses 1844 1849 +5
Flag | Coverage Δ
---|---
ubuntu-latest-3.7 | 80.69% <ø> (?)
Flags with carried forward coverage won't be shown. Click here to find out more.
@mauicv as you're pushing to this branch, could you change the upper version bound to <3.0 and change the title of the PR too, please?
A newer version of torch exists, but since this PR has been edited by someone other than Dependabot I haven't updated it. You'll get a PR for the updated version as normal once this PR is merged.
Due to https://github.com/pytorch/pytorch/issues/97580, to unblock our CI running with torch 2.0 and tensorflow 2.12 we should see if we can force the import order to be torch first, then tensorflow. Noting that alibi doesn't have this issue.
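One way to force that ordering would be a small helper at the package's entry point, sketched below. The name import_in_order is hypothetical, and this only illustrates the idea of pinning import order; it is not a confirmed fix for the linked issue:

```python
import importlib
import importlib.util
import sys

def import_in_order(*names):
    """Import the named modules in the given order, skipping any that are
    not installed. A call like import_in_order("torch", "tensorflow") at
    the top of the package __init__ (before anything else imports
    tensorflow) would pin the import order process-wide."""
    for name in names:
        if importlib.util.find_spec(name) is not None:
            importlib.import_module(name)

import_in_order("torch", "tensorflow")
```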
Yes, please bump PyTorch to 2.0.
Additional note, from the torch docs:
Currently, PyTorch on Windows only supports Python 3.8-3.11; Python 2.x is not supported.
~~torch 2.0 doesn't support Windows, so we'll have to reflect that in the Windows tests...~~ Correction, the Windows tests will simply run an older torch version...