time_series_anomaly_detection_examples
Bump tensorflow from 2.2.0rc2 to 2.15.0.post1 in /python
Bumps tensorflow from 2.2.0rc2 to 2.15.0.post1.
Release notes
Sourced from tensorflow's releases.
TensorFlow 2.15.0
Release 2.15.0
TensorFlow
Breaking Changes
tf.types.experimental.GenericFunction has been renamed to tf.types.experimental.PolymorphicFunction.
Major Features and Improvements
oneDNN CPU performance optimizations Windows x64 & x86.
- Windows x64 & x86 packages:
- oneDNN optimizations are enabled by default on X86 CPUs
- To explicitly enable or disable oneDNN optimizations, set the environment variable TF_ENABLE_ONEDNN_OPTS to 1 (enable) or 0 (disable) before running TensorFlow. To fall back to default settings, unset the environment variable (see the sketch after this list).
- oneDNN optimizations can yield slightly different numerical results compared to when oneDNN optimizations are disabled due to floating-point round-off errors from different computation approaches and orders.
- To verify if oneDNN optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.
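A minimal sketch of the toggle described in the list above. It uses only the TF_ENABLE_ONEDNN_OPTS variable named in these notes; setting it from inside the process before importing TensorFlow is one convenient way to do it, not the only one.

```python
import os

# Set the flag before TensorFlow is imported so it takes effect:
# "1" enables oneDNN optimizations, "0" disables them; deleting the
# variable falls back to the default behaviour.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

import tensorflow as tf  # noqa: E402  (imported after the environment is configured)

# Per the notes above, a startup log line containing
# "oneDNN custom operations are on" indicates the optimizations are active.
print(tf.__version__)
```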
Making the tf.function type system fully available:
- tf.types.experimental.TraceType now allows custom tf.function inputs to declare Tensor decomposition and type casting support.
- Introducing tf.types.experimental.FunctionType as the comprehensive representation of the signature of tf.function callables. It can be accessed through the function_type property of tf.functions and ConcreteFunctions. See the tf.types.experimental.FunctionType documentation for more details.
Introducing tf.types.experimental.AtomicFunction as the fastest way to perform TF computations in Python:
- Can be accessed through the inference_fn property of ConcreteFunctions
- Does not support gradients.
- See the tf.types.experimental.AtomicFunction documentation for how to call and use it (a brief sketch follows this list).
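A hedged sketch of the typing additions above, using only the names given in these notes (PolymorphicFunction, function_type on tf.function objects, inference_fn on ConcreteFunctions); treat it as an illustration rather than a definitive reference, since the exact printed output varies by build.

```python
import tensorflow as tf


@tf.function
def square(x):
    return x * x


# Breaking change noted above: the public base type of tf.function callables
# is now PolymorphicFunction (formerly GenericFunction).
assert isinstance(square, tf.types.experimental.PolymorphicFunction)

# Trace once so a ConcreteFunction exists for a scalar float32 input.
concrete = square.get_concrete_function(tf.TensorSpec(shape=(), dtype=tf.float32))

# FunctionType: the comprehensive signature representation of the callable.
print(square.function_type)
print(concrete.function_type)

# AtomicFunction: reachable via inference_fn; the fastest way to run the traced
# computation from Python, but it does not support gradients.
atomic = concrete.inference_fn
print(type(atomic))
```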
tf.data:
- Moved option warm_start from tf.data.experimental.OptimizationOptions to tf.data.Options.
tf.lite:
- sub_op and mul_op support broadcasting up to 6 dimensions.
- The tflite::SignatureRunner class, which provides support for named parameters and for multiple named computations within a single TF Lite model, is no longer considered experimental. Likewise for the following signature-related methods of tflite::Interpreter (a Python sketch follows these quoted notes):
  - tflite::Interpreter::GetSignatureRunner
  - tflite::Interpreter::signature_keys
  - tflite::Interpreter::signature_inputs
  - tflite::Interpreter::signature_outputs
  - tflite::Interpreter::input_tensor_by_signature
  - tflite::Interpreter::output_tensor_by_signature
- Similarly, the following signature runner functions in the TF Lite C API are no longer considered experimental:
... (truncated)
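The SignatureRunner stabilization above concerns the C++ and C APIs, but the same idea is easiest to show from Python via tf.lite.Interpreter. A hedged sketch follows; the file name model.tflite, the signature name serving_default, and the input name x are assumptions for illustration, not values from the notes.

```python
import numpy as np
import tensorflow as tf

# Assumed placeholders: "model.tflite", "serving_default", and input "x".
interpreter = tf.lite.Interpreter(model_path="model.tflite")

# List the named computations (signatures) the model exposes.
print(interpreter.get_signature_list())

# Run one named computation through its signature runner.
runner = interpreter.get_signature_runner("serving_default")
outputs = runner(x=np.array([1.0, 2.0, 3.0], dtype=np.float32))
print(outputs)
```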
Changelog
Sourced from tensorflow's changelog.
Release 2.15.0.post1
TensorFlow
Bug Fixes and Other Changes
- Hot-fix was needed for an issue affecting the TensorFlow installation process.
- TensorFlow 2.15.0 Python package was requesting tensorrt-related packages that cannot be found unless the user installs them beforehand or provides additional installation flags.
- This dependency affected anyone installing TensorFlow 2.15 alongside NVIDIA CUDA dependencies via pip install tensorflow[and-cuda].
- Depending on the installation method, TensorFlow 2.14 would be installed instead of 2.15, or users could receive an installation error due to those missing dependencies.
- TensorFlow 2.15.0.post1 is being released for Linux x86_64 to resolve this issue as quickly as possible.
- This version removes the tensorrt Python package dependencies from the tensorflow[and-cuda] installation method to ensure pip install tensorflow[and-cuda] works as originally intended for TensorFlow 2.15.
- Support for TensorRT is otherwise unaffected as long as TensorRT is already installed on the system.
- Using .post1 instead of a full minor release allowed us to push this release out quickly. However, please note the following caveat:
- For users wishing to pin their Python dependency in a requirements file or other situation, under Python's version specification rules, tensorflow[and-cuda]==2.15.0 will not install this fixed version. Please use ==2.15.0.post1 to specify this exact version on Linux platforms, or a fuzzy version specification, such as ==2.15.*, to specify the most recent compatible version of TensorFlow 2.15 on all platforms (see the pip sketch after this quoted changelog).
Release 2.15.0
TensorFlow
Breaking Changes
tf.types.experimental.GenericFunction has been renamed to tf.types.experimental.PolymorphicFunction.
Known Caveats
Major Features and Improvements
oneDNN CPU performance optimizations Windows x64 & x86.
- Windows x64 & x86 packages:
- oneDNN optimizations are enabled by default on X86 CPUs
... (truncated)
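To make the installation and pinning caveat above concrete: pip install tensorflow[and-cuda] is the install command quoted in the changelog, and the requirements-file pins below use only the version specifiers it names (for this repository the dependency file lives under /python, per the PR title).

```
# requirements.txt sketch: ==2.15.0 alone will NOT resolve to the fixed release.
tensorflow[and-cuda]==2.15.0.post1   # exact pin; Linux x86_64 only
# or, on all platforms, the most recent compatible 2.15 patch release:
# tensorflow[and-cuda]==2.15.*
```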
Commits
- See full diff in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)