torchTS
Bump the pip group with 15 updates
Bumps the pip group with 15 updates:
| Package | From | To |
|---|---|---|
| torch | 1.11.0 | 1.13.1 |
| pytorch-lightning | 1.5.10 | 1.6.0 |
| scipy | 1.7.3 | 1.10.0 |
| aiohttp | 3.8.1 | 3.9.4 |
| certifi | 2021.10.8 | 2023.7.22 |
| grpcio | 1.44.0 | 1.53.2 |
| idna | 3.3 | 3.7 |
| jinja2 | 3.0.3 | 3.1.3 |
| numpy | 1.21.5 | 1.22.0 |
| oauthlib | 3.2.0 | 3.2.2 |
| protobuf | 3.19.4 | 3.19.5 |
| requests | 2.27.1 | 2.31.0 |
| setuptools | 59.5.0 | 65.5.1 |
| urllib3 | 1.26.8 | 1.26.18 |
| werkzeug | 2.0.3 | 2.3.8 |
Updates torch from 1.11.0 to 1.13.1
Release notes
Sourced from torch's releases.
PyTorch 1.13.1 Release, small bug fix release
This release is meant to fix the following issues (regressions / silent correctness):
- RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True #88669
- Installation via pip on Amazon Linux 2, regression #88869
- Installation using poetry on Mac M1, failure #88049
- Missing masked tensor documentation #89734
- torch.jit.annotations.parse_type_line is not safe (command injection) #88868
- Use the Python frame safely in _pythonCallstack #88993
- Double-backward with full_backward_hook causes RuntimeError #88312
- Fix logical error in get_default_qat_qconfig #88876
- Fix cuda/cpu check on NoneType and unit test #88854 and #88970
- Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops #88504
- Onnx operator_export_type on the new registry #87735
- torchrun AttributeError caused by file_based_local_timer on Windows #85427
The release tracker should contain all relevant pull requests related to this release as well as links to related issues
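For context on the first item above, here is a minimal sketch (not taken from the release notes; sizes are illustrative) of the `MultiheadAttention` configuration that the #88669 fix concerns:

```python
# Illustrative only: bias=False together with batch_first=True is the
# combination that raised a RuntimeError before the 1.13.1 fix (#88669).
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=64, num_heads=8, bias=False, batch_first=True)
x = torch.randn(2, 16, 64)        # (batch, sequence, embedding)
out, attn_weights = mha(x, x, x)  # self-attention
print(out.shape)                  # torch.Size([2, 16, 64])
```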
PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available
Pytorch 1.13 Release Notes
- Highlights
- Backwards Incompatible Changes
- New Features
- Improvements
- Performance
- Documentation
- Developers
Highlights
We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.
Summary:
The BetterTransformer feature set supports fastpath execution for common Transformer models during inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models, and Nested Tensors are now enabled by default.
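As a rough illustration of that fastpath, a minimal sketch (assuming a stock `nn.TransformerEncoder` model; the sizes are arbitrary and not from the release notes):

```python
# Sketch: the BetterTransformer fastpath is selected automatically for
# eligible nn.TransformerEncoder models during inference -- no model changes.
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6).eval()

src = torch.rand(32, 10, 256)  # (batch, sequence, features)
with torch.inference_mode():
    out = encoder(src)         # fastpath kernels are used when eligible
print(out.shape)               # torch.Size([32, 10, 256])
```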
Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA versions as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.
Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to `import functorch` and use functorch without needing to install another package.

PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.
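A minimal sketch of the in-tree import described above (the loss function and shapes are hypothetical):

```python
# Sketch: per-sample gradients via composable grad + vmap, imported from the
# functorch that ships with PyTorch 1.13 -- no separate package install.
import torch
from functorch import grad, vmap

def loss(w, x):
    return (w * x).sum()

w = torch.randn(3)
xs = torch.randn(5, 3)

per_sample_grads = vmap(grad(loss), in_dims=(None, 0))(w, xs)
print(per_sample_grads.shape)  # torch.Size([5, 3])
```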
| Stable | Beta | Prototype |
|---|---|---|
| Better Transformer | Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs | Arm® Compute Library backend support for AWS Graviton |
| CUDA 10.2 and 11.3 CI/CD Deprecation | Extend NNC to support channels last and bf16 | CUDA Sanitizer |
| | Functorch now in PyTorch Core Library | |
| | Beta Support for M1 devices | |

You can check the blogpost that shows the new features here.
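And a minimal sketch of the beta M1 support (assumes a macOS build with the MPS backend available; falls back to CPU otherwise):

```python
# Sketch: run a tensor operation on Apple silicon via the beta "mps" device.
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
y = x @ x.T
print(y.device)
```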
Backwards Incompatible changes
... (truncated)
Changelog
Sourced from torch's changelog.
Releasing PyTorch
- Release Compatibility Matrix
- Release Cadence
- General Overview
- Cutting a release branch preparations
- Cutting release branches
- Running Launch Execution team Core XFN sync
- Drafting RCs (Release Candidates) for PyTorch and domain libraries
- Preparing and Creating Final Release candidate
- Promoting RCs to Stable
- Additional Steps to prepare for release day
- Patch Releases
- Hardware / Software Support in Binary Build Matrix
- Submitting Tutorials
- Special Topics
Release Compatibility Matrix
Following is the Release Compatibility Matrix for PyTorch releases:
... (truncated)
Commits
- 49444c3 [BE] Do not package caffe2 in wheel (#87986) (#90433)
- 56de8a3 Add manual cuda deps search logic (#90411) (#90426)
- a4d16e0 Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (#88504) (#90104)
- 80abad3 Handle Tensor.deepcopy via clone(), on IPU (#89129) (#89999)
- 73a852a [Release only change] Fix rocm5.1.1 docker image (#90321)
- 029ec16 Add platform markers for linux only extra_install_requires (#88826) (#89924)
- 197c5c0 Fix cuda/cpu check on NoneType (#88854) (#90068)
- aadbeb7 Make TorchElastic timer importable on Windows (#88522) (#90045)
- aa94433 Mark IPU device as not supports_as_strided (#89130) (#89998)
- 59b4f3b Use the Python frame safely in _pythonCallstack (#89997)
- Additional commits viewable in compare view
Updates pytorch-lightning from 1.5.10 to 1.6.0
Release notes
Sourced from pytorch-lightning's releases.
PyTorch Lightning 1.6: Support Intel's Habana Accelerator, New efficient DDP strategy (Bagua), Manual Fault-tolerance, Stability and Reliability.
The core team is excited to announce the PyTorch Lightning 1.6 release ⚡
Highlights
PyTorch Lightning 1.6 is the work of 99 contributors who have worked on features, bug-fixes, and documentation for a total of over 750 commits since 1.5. This is our most active release yet. Here are some highlights:
Introducing Intel's Habana Accelerator
Lightning 1.6 now supports the Habana® framework, which includes Gaudi® AI training processors. Their heterogeneous architecture includes a cluster of fully programmable Tensor Processing Cores (TPC) along with its associated development tools and libraries and a configurable Matrix Math engine.
You can leverage the Habana hardware to accelerate your Deep Learning training workloads simply by passing:
trainer = pl.Trainer(accelerator="hpu")single Gaudi training
trainer = pl.Trainer(accelerator="hpu", devices=1)
distributed training with 8 Gaudi
trainer = pl.Trainer(accelerator="hpu", devices=8)
The Bagua Strategy
The Bagua Strategy is a deep learning acceleration framework that supports multiple, advanced distributed training algorithms with state-of-the-art system relaxation techniques. Enabling Bagua, which can be considerably faster than vanilla PyTorch DDP, is as simple as:
trainer = pl.Trainer(strategy="bagua")or to choose a custom algorithm
trainer = pl.Trainer(strategy=BaguaStrategy(algorithm="gradient_allreduce") # default
Towards stable Accelerator, Strategy, and Plugin APIs
The `Accelerator`, `Strategy`, and `Plugin` APIs are a core part of PyTorch Lightning. They're where all the distributed boilerplate lives, and we're constantly working to improve both them and the overall PyTorch Lightning platform experience.

In this release, we've made some large changes to achieve that goal. Not to worry, though! The only users affected by these changes are those who use custom implementations of Accelerator and Strategy (`TrainingTypePlugin`) as well as certain Plugins. In particular, we want to highlight the following changes:

- All `TrainingTypePlugin`s have been renamed to `Strategy` (#11120). Strategy is a more appropriate name because it encompasses more than simply training communication. This change is now aligned with the changes we implemented in 1.5, which introduced the new `strategy` and `devices` flags to the Trainer (a minimal sketch of these flags follows below).
... (truncated)
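A minimal sketch of those `strategy`/`devices` flags (the values are illustrative, not from the release notes):

```python
# Sketch: the Trainer flags introduced in 1.5 that the Strategy rename aligns
# with; "ddp" stands in for what used to be configured via a TrainingTypePlugin.
import pytorch_lightning as pl

trainer = pl.Trainer(
    accelerator="cpu",   # hardware backend
    devices=2,           # number of devices/processes
    strategy="ddp",      # distributed strategy, formerly a TrainingTypePlugin
)
```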
Changelog
Sourced from pytorch-lightning's changelog.
[1.6.0] - 2022-03-29
Added
- Allow logging to an existing run ID in MLflow with `MLFlowLogger` (#12290)
- Enable gradient accumulation using Horovod's `backward_passes_per_step` (#11911)
- Add new `DETAIL` log level to provide useful logs for improving monitoring and debugging of batch jobs (#11008)
- Added a flag `SLURMEnvironment(auto_requeue=True|False)` to control whether Lightning handles the requeuing (#10601)
- Fault Tolerant Manual
  - Add `_Stateful` protocol to detect if classes are stateful (#10646)
  - Add `_FaultTolerantMode` enum used to track different supported fault tolerant modes (#10645)
  - Add a `_rotate_worker_indices` utility to reload the state according the latest worker (#10647)
  - Add stateful workers (#10674)
  - Add an utility to collect the states across processes (#10639)
  - Add logic to reload the states across data loading components (#10699)
  - Cleanup some fault tolerant utilities (#10703)
  - Enable Fault Tolerant Manual Training (#10707)
  - Broadcast the `_terminate_gracefully` to all processes and add support for DDP (#10638)
- Added support for re-instantiation of custom (subclasses of) `DataLoaders` returned in the `*_dataloader()` methods, i.e., automatic replacement of samplers now works with custom types of `DataLoader` (#10680)
- Added a function to validate if fault tolerant training is supported. (#10465)
- Added a private callback to manage the creation and deletion of fault-tolerance checkpoints (#11862)
- Show a better error message when a custom `DataLoader` implementation is not well implemented and we need to reconstruct it (#10719)
- Show a better error message when frozen dataclass is used as a batch (#10927)
- Save the `Loop`'s state by default in the checkpoint (#10784)
- Added `Loop.replace` to easily switch one loop for another (#10324)
- Added support for `--lr_scheduler=ReduceLROnPlateau` to the `LightningCLI` (#10860)
- Added `LightningCLI.configure_optimizers` to override the `configure_optimizers` return value (#10860)
- Added `LightningCLI(auto_registry)` flag to register all subclasses of the registerable components automatically (#12108)
- Added a warning that shows when `max_epochs` in the `Trainer` is not set (#10700)
- Added support for returning a single Callback from `LightningModule.configure_callbacks` without wrapping it into a list (#11060)
- Added `console_kwargs` for `RichProgressBar` to initialize inner Console (#10875)
- Added support for shorthand notation to instantiate loggers with the `LightningCLI` (#11533)
- Added a `LOGGER_REGISTRY` instance to register custom loggers to the `LightningCLI` (#11533)
- Added info message when the `Trainer` arguments `limit_*_batches`, `overfit_batches`, or `val_check_interval` are set to `1` or `1.0` (#11950)
- Added a `PrecisionPlugin.teardown` method (#10990)
- Added `LightningModule.lr_scheduler_step` (#10249)
- Added support for no pre-fetching to `DataFetcher` (#11606)
- Added support for optimizer step progress tracking with manual optimization (#11848)
- Return the output of the `optimizer.step`. This can be useful for `LightningLite` users, manual optimization users, or users overriding `LightningModule.optimizer_step` (#11711)
- Teardown the active loop and strategy on exception (#11620)
- Added a `MisconfigurationException` if user provided `opt_idx` in scheduler config doesn't match with actual optimizer index of its respective optimizer (#11247)
- Added a `loggers` property to `Trainer` which returns a list of loggers provided by the user (#11683)
- Added a `loggers` property to `LightningModule` which retrieves the `loggers` property from `Trainer` (#11683)
- Added support for DDP when using a `CombinedLoader` for the training data (#11648)
- Added a warning when using `DistributedSampler` during validation/testing (#11479)
- Added support for `Bagua` training strategy (#11146)
- Added support for manually returning a `poptorch.DataLoader` in a `*_dataloader` hook (#12116)
- Added `rank_zero` module to centralize utilities (#11747)
- Added a `_Stateful` support for `LightningDataModule` (#11637)
- Added `_Stateful` support for `PrecisionPlugin` (#11638)
... (truncated)
Commits
- 44e3edb Cleanup CHANGELOG (#12507)
- e3893b9 Merge pull request #12509 from RobertLaurella/patch-1
- 041da41 Remove TPU Availability check from parse devices (#12326)
- 4fe0076 Prepare for the 1.6.0 release
- 17215ed Fix titles capitalization in docs
- a775804 Update Plugins doc (#12440)
- 71e25f3 Update CI in README.md (#12495)
- c6cb634 Add usage of Jupyter magic command for loggers (#12333)
- 42169a2 Add typing to `LightningModule.trainer` (#12345)
- 2de6a9b Fix warning message formatting in save_hyperparameters (#12498)
- Additional commits viewable in compare view
Updates scipy from 1.7.3 to 1.10.0
Release notes
Sourced from scipy's releases.
SciPy 1.10.0 Release Notes
SciPy `1.10.0` is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Before upgrading, we recommend that users check that their own code does not use deprecated SciPy functionality (to do so, run your code with `python -Wd` and check for `DeprecationWarning`s). Our development attention will now shift to bug-fix releases on the 1.10.x branch, and on adding new features on the main branch.

This release requires Python `3.8+` and NumPy `1.19.5` or greater. For running on PyPy, PyPy3 `6.0+` is required.

Highlights of this release

- A new dedicated datasets submodule (`scipy.datasets`) has been added, and is now preferred over usage of `scipy.misc` for dataset retrieval.
- A new `scipy.interpolate.make_smoothing_spline` function was added. This function constructs a smoothing cubic spline from noisy data, using the generalized cross-validation (GCV) criterion to find the tradeoff between smoothness and proximity to data points.
- `scipy.stats` has three new distributions, two new hypothesis tests, three new sample statistics, a class for greater control over calculations involving covariance matrices, and many other enhancements.

New features
`scipy.datasets` introduction

- A new dedicated `datasets` submodule has been added. The submodule is meant for datasets that are relevant to other SciPy submodules and content (tutorials, examples, tests), as well as to contain a curated set of datasets that are of wider interest. As of this release, all the datasets from `scipy.misc` have been added to `scipy.datasets` (and deprecated in `scipy.misc`).
- The submodule is based on Pooch (a new optional dependency for SciPy), a Python package to simplify fetching data files. This move will, in a subsequent release, facilitate SciPy to trim down the sdist/wheel sizes, by decoupling the data files and moving them out of the SciPy repository, hosting them externally and
... (truncated)
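A minimal sketch of the new submodule in use (assumes SciPy >= 1.10 with the optional `pooch` dependency installed; the first call downloads the data files):

```python
# Sketch: fetch bundled datasets from scipy.datasets instead of the
# deprecated scipy.misc equivalents.
from scipy import datasets

face = datasets.face()      # (768, 1024, 3) color image as an ndarray
ascent = datasets.ascent()  # (512, 512) grayscale image
print(face.shape, ascent.shape)
```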
Commits
- dde5059 REL: 1.10.0 final [wheel build]
- 7856f28 Merge pull request #17696 from tylerjereddy/treddy_110_final_prep
- 205b624 DOC: add missing author
- 1ab9f1b DOC: update 1.10.0 relnotes
- ac2f45f MAINT: integrate._qmc_quad: mark as private with preceding underscore
- 3e0ae1a REV: integrate.qmc_quad: delay release to SciPy 1.11.0
- 34cdf05 MAINT: FFT pybind11 fixups
- 843500a Merge pull request #17689 from mdhaber/gh17686
- 089924b REL: integrate.qmc_quad: remove from release notes
- 3e47110 REL: 1.10.0rc3 unreleased
- Additional commits viewable in compare view
Updates aiohttp from 3.8.1 to 3.9.4
Release notes
Sourced from aiohttp's releases.
3.9.4
Bug fixes
- The asynchronous internals now set the underlying causes when assigning exceptions to the future objects -- by :user:`webknjaz`. Related issues and pull requests on GitHub: #8089.
- Treated values of `Accept-Encoding` header as case-insensitive when checking for gzip files -- by :user:`steverep`. Related issues and pull requests on GitHub: #8104.
- Improved the DNS resolution performance on cache hit -- by :user:`bdraco`. This is achieved by avoiding an :mod:`asyncio` task creation in this case. Related issues and pull requests on GitHub: #8163.
- Changed the type annotations to allow `dict` on :meth:`aiohttp.MultipartWriter.append`, :meth:`aiohttp.MultipartWriter.append_json` and :meth:`aiohttp.MultipartWriter.append_form` -- by :user:`cakemanny` (a sketch follows after this list). Related issues and pull requests on GitHub: #7741.
- Ensure websocket transport is closed when client does not close it -- by :user:`bdraco`. The transport could remain open if the client did not close it. This change ensures the transport is closed when the client does not close it.
... (truncated)
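A minimal sketch of the widened `MultipartWriter` annotations mentioned in the list above (the payload values are made up):

```python
# Sketch: dict payloads passed straight to append_json / append_form, which
# the 3.9.4 type annotations now accept.
import aiohttp

writer = aiohttp.MultipartWriter("form-data")
writer.append_json({"name": "value"})  # dict serialized as a JSON part
writer.append_form({"field": "1"})     # dict encoded as form fields
print(writer.boundary)
```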
Changelog
Sourced from aiohttp's changelog.
3.9.4 (2024-04-11)
Bug fixes
- The asynchronous internals now set the underlying causes when assigning exceptions to the future objects -- by :user:`webknjaz`. Related issues and pull requests on GitHub: :issue:`8089`.
- Treated values of `Accept-Encoding` header as case-insensitive when checking for gzip files -- by :user:`steverep`. Related issues and pull requests on GitHub: :issue:`8104`.
- Improved the DNS resolution performance on cache hit -- by :user:`bdraco`. This is achieved by avoiding an :mod:`asyncio` task creation in this case. Related issues and pull requests on GitHub: :issue:`8163`.
- Changed the type annotations to allow `dict` on :meth:`aiohttp.MultipartWriter.append`, :meth:`aiohttp.MultipartWriter.append_json` and :meth:`aiohttp.MultipartWriter.append_form` -- by :user:`cakemanny`. Related issues and pull requests on GitHub: :issue:`7741`.
- Ensure websocket transport is closed when client does not close it -- by :user:`bdraco`. The transport could remain open if the client did not close it. This change ensures the transport is closed when the client does not close it.
... (truncated)
Commits
- b3397c7 Release v3.9.4 (#8201)
- a7e240a [PR #8320/9ba9a4e5 backport][3.9] Fix Python parser to mark responses without...
- 2833552 Escape filenames and paths in HTML when generating index pages (#8317) (#8319)
- ed43040 [PR #8309/c29945a1 backport][3.9] Improve reliability of run_app test (#8315)
- ec2be05 [PR #8299/28d026eb backport][3.9] Create marker for internal tests (#8307)
- 292d961 [PR #8304/88c80c14 backport][3.9] Check for backports in CI (#8305)
- cebe526 Fix handling of multipart/form-data (#8280) (#8302)
- 270ae9c [PR #8297/d15f07cf backport][3.9] Upgrade to llhttp 9.2.1 (#8292) (#8298)
- bb23105 [PR #8283/54e13b0a backport][3.9] Fix blocking I/O in the event loop while pr...
- 3f79241 [PR #8286/28f1fd88 backport][3.9] docs: remove repetitive word in comment (#8...
- Additional commits viewable in compare view
Updates certifi from 2021.10.8 to 2023.7.22
Commits
- 8fb96ed 2023.07.22
- afe7722 Bump actions/setup-python from 4.6.1 to 4.7.0 (#230)
- 2038739 Bump dessant/lock-threads from 3.0.0 to 4.0.1 (#229)
- 44df761 Hash pin Actions and enable dependabot (#228)
- 8b3d7ba 2023.05.07
- 53da240 ci: Add Python 3.12-dev to the testing (#224)
- c2fc3b1 Create a Security Policy (#222)
- c211ef4 Set up permissions to github workflows (#218)
- 2087de5 Don't let deprecation warning fail CI (#219)
- e0b9fc5 remove paragraphs about 1024-bit roots from README
- Additional commits viewable in compare view
Updates grpcio from 1.44.0 to 1.53.2
Release notes
Sourced from grpcio's releases.
Release v1.53.2
This is release gRPC Core 1.53.2 (glockenspiel).
For gRPC documentation, see grpc.io. For previous releases, see Releases.
This release contains refinements, improvements, and bug fixes.
Core
- [backport][iomgr][EventEngine] Improve server handling of file descriptor exhaustion by @drfloob in grpc/grpc#33672

Release v1.53.1
This is release gRPC Core 1.53.1 (glockenspiel).
For gRPC documentation, see grpc.io. For previous releases, see Releases.
This release contains refinements, improvements, and bug fixes.
- Fixed CVE-2023-32731
- Fixed CVE-2023-32732
Release v1.53.0
This is release 1.53.0 (glockenspiel) of gRPC Core.
For gRPC documentation, see grpc.io. For previous releases, see Releases.
This release contains refinements, improvements, and bug fixes, with highlights listed below.
Core
- xDS: fix crash when removing the last endpoint from the last locality in weighted_target. (#32592)
- filter stack: pass peer name up via recv_initial_metadata batch. (#31933)
- [EventEngine] Add advice against blocking work in callbacks. (#32397)
- [http2] Dont drop connections on metadata limit exceeded. (#32309)
- xDS: reject aggregate cluster with empty cluster list. (#32238)
- Fix Python epoll1 Fork Support. (#32196)
- server: introduce ServerMetricRecorder API and move per-call reporting from a C++ interceptor to a C-core filter. (#32106)
- [EventEngine] Add invalid handle types to the public API. (#32202)
- [EventEngine] Refactoring the EventEngine Test Suite: Part 1. (#32127)
- xDS: fix WeightedClusters total weight handling. (#32134)
C++
... (truncated)
Changelog
Sourced from grpcio's changelog.
gRPC Release Schedule
Below is the release schedule for gRPC Java, Go and Core and its dependent languages C++, C#, Objective-C, PHP, Python and Ruby.
Releases are scheduled every six weeks on Tuesdays on a best effort basis. In some unavoidable situations a release may be delayed or released early or a language may skip a release altogether and do the next release to catch up with other languages. See the past releases in the links above. A six-week cycle gives us a good balance between delivering new features/fixes quickly and keeping the release overhead low.
The gRPC release support policy can be found here.
Releases are cut from release branches. For Core and Java repos, the release branch is cut two weeks before the scheduled release date. For Go, the branch is cut just before the release. An RC (release candidate) is published for Core and its dependent languages just after the branch cut. This RC is later promoted to release version if no further changes are made to the release branch. We do our best to keep the head of the master branch stable at all times regardless of release schedule. Daily build packages from the master branch for C#, PHP, Python, Ruby and Protoc plugins are published on packages.grpc.io. If you depend on gRPC in production we recommend setting up your CI system to test the RCs and, if possible, the daily builds.
Names of gRPC releases are here.
| Release | Scheduled Branch Cut | Scheduled Release Date |
|---|---|---|
| v1.17.0 | Nov 19, 2018 | Dec 4, 2018 |
| v1.18.0 | Jan 2, 2019 | Jan 15, 2019 |
| v1.19.0 | Feb 12, 2019 | Feb 26, 2019 |
| v1.20.0 | Mar 26, 2019 | Apr 9, 2019 |
| v1.21.0 | May 7, 2019 | May 21, 2019 |
| v1.22.0 | Jun 18, 2019 | Jul 2, 2019 |
| v1.23.0 | Jul 30, 2019 | Aug 13, 2019 |
| v1.24.0 | Sept 10, 2019 | Sept 24, 2019 |
| v1.25.0 | Oct 22, 2019 | Nov 5, 2019 |
| v1.26.0 | Dec 3, 2019 | Dec 17, 2019 |
| v1.27.0 | Jan 14, 2020 | Jan 28, 2020 |
| v1.28.0 | Feb 25, 2020 | Mar 10, 2020 |
| v1.29.0 | Apr 7, 2020 | Apr 21, 2020 |
| v1.30.0 | May 19, 2020 | Jun 2, 2020 |
| v1.31.0 | Jul 14, 2020 | Jul 28, 2020 |
| v1.32.0 | Aug 25, 2020 | Sep 8, 2020 |
| v1.33.0 | Oct 6, 2020 | Oct 20, 2020 |
| v1.34.0 | Nov 17, 2020 | Dec 1, 2020 |
| v1.35.0 | Dec 29, 2020 | Jan 12, 2021 |
| v1.36.0 | Feb 9, 2021 | Feb 23, 2021 |
| v1.37.0 | Mar 23, 2021 | Apr 6, 2021 |
| v1.38.0 | May 4, 2021 | May 18, 2021 |
| v1.39.0 | Jun 15, 2021 | Jun 29, 2021 |
| v1.40.0 | Jul 27, 2021 | Aug 10, 2021 |
| v1.41.0 | Sep 7, 2021 | Sep 21, 2021 |
| v1.42.0 | Oct 19, 2021 | Nov 2, 2021 |
| v1.43.0 | Nov 30, 2021 | Dec 14, 2021 |
Commits
- afb307f [v1.53.x][Interop] Backport Python image update (#33864)
- 7a9373b [Backport] [dependency] Restrict cython to less than 3.X (#33770)
- fdb64a6 [v1.53][Build] Update Phusion baseimage (#33767) (#33836)
- cdf4186 [PSM Interop] Legacy tests: fix xDS test client build (v1.53.x backport) (#33...
- ce5b93a [PSM Interop] Legacy test builds always pull the driver from master (v1.53.x ...
- b24b6ea [release] Bump release version to 1.53.2 (#33709)
- 1e86ca5 [backport][iomgr][EventEngine] Improve server handling of file descriptor exh...
- aff3066 [PSM interop] Don't fail url_map target if sub-target already failed (v1.53.x...
- 539d75c [PSM interop] Don't fail target if sub-target already failed (#33222) (v1.53....
- 3e79c88 [Release] Bump version to 1.53.1 (on v1.53.x branch) (#33047)
- Additional commits viewable in compare view
Updates idna from 3.3 to 3.7
Release notes
Sourced from idna's releases.
v3.7
What's Changed
- Fix issue where specially crafted inputs to encode() could take an exceptionally long amount of time to process. [CVE-2024-3651]
Thanks to Guido Vranken for reporting the issue.
Full Changelog: https://github.com/kjd/idna/compare/v3.6...v3.7
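For context, a minimal sketch of the `encode()` API whose worst-case runtime the fix addresses (the domain is an arbitrary example):

```python
# Sketch: IDNA 2008 encoding/decoding with the idna package.
import idna

encoded = idna.encode("ドメイン.テスト")
print(encoded)               # b'xn--eckwd4c7c.xn--zckzah'
print(idna.decode(encoded))  # ドメイン.テスト
```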
Changelog
Sourced from idna's changelog.
3.7 (2024-04-11)
- Fix issue where specially crafted inputs to encode() could take an exceptionally long amount of time to process. [CVE-2024-3651]
Thanks to Guido Vranken for reporting the issue.
3.6 (2023-11-25)
- Fix regression to include tests in source distribution.
3.5 (2023-11-24)
- Update to Unicode 15.1.0
- String codec name is now "idna2008" as overriding the system codec "idna" was not working.
- Fix typing error for codec encoding
- "setup.cfg" has been added for this release due to some downstream lack of adherence to PEP 517. Should be removed in a future release so please prepare accordingly.
- Removed reliance on a symlink for the "idna-data" tool to comport with PEP 517 and the Python Packaging User Guide for sdist archives.
- Added security reporting protocol for project
Thanks Jon Ribbens, Diogo Teles Sant'Anna, Wu Tingfeng for contributions to this release.
3.4 (2022-09-14)
- Update to Unicode 15.0.0
- Migrate to pyproject.toml for build information (PEP 621)
- Correct another instance where generic exception was raised instead of IDNAError for malformed input
- Source distribution uses zeroized file ownership for improved reproducibility
Thanks to Seth Michael Larson for contributions to this release.
Commits
- 1d365e1 Release v3.7
- c1b3154 Merge pull request #172 from kjd/optimize-contextj
- 0394ec7 Merge branch 'master' into optimize-contextj
- cd58a23 Merge pull request #152 from elliotwutingfeng/dev
- 5beb28b More efficient resolution of joiner contexts
- 1b12148 Update ossf/scorecard-action to v2.3.1
- d516b87 Update Github actions/checkout to v4
- c095c75 Merge branch 'master' into dev
- 60a0a4c Fix typo in GitHub Actions workflow key
- 5918a0e Merge branch 'master' into dev
- Additional commits viewable in compare view
Updates jinja2 from 3.0.3 to 3.1.3
Release notes
Sourced from jinja2's releases.
3.1.3
This is a fix release for the 3.1.x feature branch.
- Fix for GHSA-h5c8-rqwp-cp95. You are affected if you are using `xmlattr` and passing user input as attribute keys.
- Changes: https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-3
- Milestone: https://github.com/pallets/jinja/milestone/15?closed=1
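For context, a minimal sketch of the `xmlattr` filter the advisory concerns (the attribute dict is illustrative; from 3.1.3 on, keys containing spaces are rejected):

```python
# Sketch: building an attribute string with the built-in xmlattr filter.
from jinja2 import Environment

env = Environment()
tmpl = env.from_string("<img{{ {'src': 'logo.png', 'alt': 'Logo'} | xmlattr }}>")
print(tmpl.render())  # <img src="logo.png" alt="Logo">
```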
3.1.2
This is a fix release for the 3.1.0 feature release.
- Changes: https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-2
- Milestone: https://github.com/pallets/jinja/milestone/13?closed=1
3.1.1
- Changes: https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-1
- Milestone: https://github.com/pallets/jinja/milestone/12?closed=1
3.1.0
This is a feature release, which includes new features and removes previously deprecated features. The 3.1.x branch is now the supported bugfix branch, the 3.0.x branch has become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as pip-tools to pin all dependencies and control upgrades. We also encourage upgrading to MarkupSafe 2.1.1, the latest version at this time.
Changelog
Sourced from jinja2's changelog.
Version 3.1.3
Released 2024-01-10
- Fix compiler error when checking if required blocks in parent templates are empty. :pr:`1858`
- `xmlattr` filter does not allow keys with spaces. GHSA-h5c8-rqwp-cp95
- Make error messages stemming from invalid nesting of `{% trans %}` blocks more helpful. :pr:`1918`

Version 3.1.2
Released 2022-04-28

- Add parameters to `Environment.overlay` to match `__init__`. :issue:`1645`
- Handle race condition in `FileSystemBytecodeCache`. :issue:`1654`

Version 3.1.1
Released 2022-03-25

- The template filename on Windows uses the primary path separator. :issue:`1637`

Version 3.1.0
Released 2022-03-24

- Drop support for Python 3.6. :pr:`1534`
- Remove previously deprecated code. :pr:`1544`
  - `WithExtension` and `AutoEscapeExtension` are built-in now.
  - `contextfilter` and `contextfunction` are replaced by `pass_context`. `evalcontextfilter` and `evalcontextfunction` are replaced by `pass_eval_context`. `environmentfilter` and `environmentfunction` are replaced by `pass_environment`.
  - `Markup` and `escape` should be imported from MarkupSafe.
  - Compiled templates from very old Jinja versions may need to be recompiled.
  - Legacy resolve mode for `Context` subclasses is no longer supported. Override `resolve_or_missing` instead of
... (truncated)
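A minimal sketch of the `pass_context` replacement mentioned in the 3.1.0 notes above (the filter and template variables are hypothetical):

```python
# Sketch: pass_context replaces the removed contextfilter/contextfunction
# decorators; the active template context is injected as the first argument.
from jinja2 import Environment, pass_context

@pass_context
def greet(context, name):
    return f"{context.get('greeting', 'Hello')}, {name}!"

env = Environment()
env.filters["greet"] = greet
print(env.from_string("{{ 'World' | greet }}").render(greeting="Hi"))  # Hi, World!
```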
Commits
- d9de4bb release version 3.1.3

(Description has been truncated.)
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 35.64%. Comparing base (93597ca) to head (c706241).
Additional details and impacted files
@@ Coverage Diff @@
## main #325 +/- ##
=======================================
Coverage 35.64% 35.64%
=======================================
Files 8 8
Lines 491 491
=======================================
Hits 175 175
Misses 316 316
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.