off-nutrition-table-extractor
Bump tensorflow from 1.14.0 to 2.9.1
Bumps tensorflow from 1.14.0 to 2.9.1.
Release notes
Sourced from tensorflow's releases.
TensorFlow 2.9.1
Release 2.9.1
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See tensorflow/tensorflow#53234, protocolbuffers/protobuf#9954 and tensorflow/tensorflow#56077.
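For downstream projects that want to apply the same kind of guard, here is a minimal `setup.py` sketch; the package name is hypothetical and the specifier simply mirrors the bound described in the note above, not TensorFlow's exact pin:

```python
# Illustrative setup.py sketch; "example-package" is hypothetical.
from setuptools import setup

setup(
    name="example-package",
    version="0.1.0",
    install_requires=[
        # Keep pip from resolving a protobuf release that is incompatible
        # with TensorFlow (versions after 3.20, per the release note).
        "protobuf<3.20",
    ],
)
```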
TensorFlow 2.9.0
Release 2.9.0
Breaking Changes
- Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
- Build, Compilation and Packaging
  - TensorFlow is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
  - TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
  - Discussion for these changes can be found on SIG Build's TensorFlow Community Forum thread.
- The `tf.keras.mixed_precision.experimental` API has been removed. The non-experimental symbols under `tf.keras.mixed_precision` have been available since TensorFlow 2.4 and should be used instead.
- The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes (see the before/after sketch below):
  - Remove the word "experimental" from `tf.keras.mixed_precision` symbols. E.g., replace `tf.keras.mixed_precision.experimental.global_policy` with `tf.keras.mixed_precision.global_policy`.
  - Replace `tf.keras.mixed_precision.experimental.set_policy` with `tf.keras.mixed_precision.set_global_policy`. The experimental symbol `set_policy` was renamed to `set_global_policy` in the non-experimental API.
  - Replace `LossScaleOptimizer(opt, "dynamic")` with `LossScaleOptimizer(opt)`. If you pass anything other than `"dynamic"` to the second argument, see (1) of the next section.
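A minimal before/after sketch of those three renames (assuming TensorFlow >= 2.4; the optimizer choice is arbitrary):

```python
import tensorflow as tf

# Old (removed): tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Old (removed): policy = tf.keras.mixed_precision.experimental.global_policy()
policy = tf.keras.mixed_precision.global_policy()

# Old (removed): tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, "dynamic")
opt = tf.keras.optimizers.SGD()
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)  # dynamic scaling is the default
```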
- In the following rare cases, you need to make more changes when switching to the non-experimental API:
  1. If you passed anything other than `"dynamic"` to the `loss_scale` argument (the second argument) of `LossScaleOptimizer`:
     - The `LossScaleOptimizer` constructor takes in different arguments. See the TF 2.7 documentation of `tf.keras.mixed_precision.experimental.LossScaleOptimizer` for details on the differences; it includes examples of how to convert to the non-experimental `LossScaleOptimizer`.
  2. If you passed a value to the `loss_scale` argument (the second argument) of `Policy`:
     - The experimental version of `Policy` optionally took in a `tf.compat.v1.mixed_precision.LossScale` in the constructor, which defaulted to a dynamic loss scale for the `"mixed_float16"` policy and no loss scale for other policies. In `Model.compile`, if the model's policy had a loss scale, the optimizer would be wrapped with a `LossScaleOptimizer`. With the non-experimental `Policy`, there is no loss scale associated with the `Policy`, and `Model.compile` wraps the optimizer with a `LossScaleOptimizer` if and only if the policy is a `"mixed_float16"` policy. If you previously passed a `LossScale` to the experimental `Policy`, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a `LossScaleOptimizer` before passing it to `Model.compile` (see the sketch after this list).
  3. If you use the very rarely-used function `tf.keras.mixed_precision.experimental.get_layer_policy`:
     - Replace `tf.keras.mixed_precision.experimental.get_layer_policy(layer)` with `layer.dtype_policy`.
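For rare case (2) above, a sketch of customizing loss scaling under the non-experimental API by wrapping the optimizer yourself before `Model.compile`; the model and scale values are illustrative:

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

# The non-experimental Policy carries no LossScale, so configure one
# explicitly on the optimizer instead.
opt = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(),
    initial_scale=2**14,        # illustrative starting scale
    dynamic_growth_steps=1000,  # illustrative growth interval
)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=opt, loss="mse")  # compile() keeps the explicit wrapper
```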
- `tf.mixed_precision.experimental.LossScale` and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed `tf.keras.mixed_precision.experimental` API. The symbols are still available under `tf.compat.v1.mixed_precision`.
- The `experimental_relax_shapes` heuristic for `tf.function` has been deprecated and replaced with `reduce_retracing`, which encompasses broader heuristics to reduce the number of retraces (see below).
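A small sketch of the replacement flag on a toy function (the function itself is hypothetical):

```python
import tensorflow as tf

# Old (deprecated): @tf.function(experimental_relax_shapes=True)
@tf.function(reduce_retracing=True)
def double(x):
    return x * 2

double(tf.constant([1, 2]))     # traced on first call
double(tf.constant([1, 2, 3]))  # relaxed trace is reused instead of retracing
```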
Major Features and Improvements
`tf.keras`:
- Added `tf.keras.applications.resnet_rs` models. This includes the `ResNetRS50`, `ResNetRS101`, `ResNetRS152`, `ResNetRS200`, `ResNetRS270`, `ResNetRS350` and `ResNetRS420` model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies (see the first sketch after this list).
- Added `tf.keras.optimizers.experimental.Optimizer`. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on `tf.keras.optimizers.experimental.Optimizer`. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols `tf.keras.optimizers.Optimizer`/`Adam`/etc. will point to the new optimizers, and the previous generation of optimizers will be moved to `tf.keras.optimizers.legacy.Optimizer`/`Adam`/etc.
- Added L2 unit normalization layer `tf.keras.layers.UnitNormalization`.
- Added `tf.keras.regularizers.OrthogonalRegularizer`, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
- Added `tf.keras.layers.RandomBrightness` layer for image preprocessing.
- Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout, but can only view the logs. You can use `tf.keras.utils.disable_interactive_logging()` to write the logs to absl logging. You can also use `tf.keras.utils.enable_interactive_logging()` to change it back to stdout, or `tf.keras.utils.is_interactive_logging_enabled()` to check if interactive logging is enabled.
- Changed default value for the `verbose` argument of `Model.evaluate()` and `Model.predict()` to `"auto"`, which defaults to `verbose=1` for most cases and defaults to `verbose=2` when used with `ParameterServerStrategy` or with interactive logging disabled.
- Argument `jit_compile` in `Model.compile()` now applies to `Model.evaluate()` and `Model.predict()`. Setting `jit_compile=True` in `compile()` compiles the model's training, evaluation, and inference steps to XLA. Note that `jit_compile=True` may not necessarily work for all models. The second sketch after this list exercises this together with the logging and `verbose` changes.
- Added DTensor-related Keras APIs under the `tf.keras.dtensor` namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
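A sketch of instantiating one of the new ResNet-RS architectures; the constructor arguments mirror the other `tf.keras.applications` models, and the values here are illustrative:

```python
import tensorflow as tf

model = tf.keras.applications.resnet_rs.ResNetRS50(
    include_top=False,          # drop the ImageNet classification head
    weights=None,               # or "imagenet" for pretrained weights
    input_shape=(224, 224, 3),
)
print(model.output_shape)
```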
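And a toy end-to-end sketch exercising several of the other additions together; the model, shapes, and hyperparameters are made up for illustration:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.RandomBrightness(0.2, input_shape=(32, 32, 3)),  # new preprocessing layer
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(
        16,
        # New regularizer encouraging orthogonal rows in the kernel.
        kernel_regularizer=tf.keras.regularizers.OrthogonalRegularizer(factor=0.01),
    ),
    tf.keras.layers.UnitNormalization(),  # new L2 unit normalization layer
    tf.keras.layers.Dense(1),
])

# One of the reworked optimizers, plus jit_compile=True, which now also
# applies to evaluate() and predict() (may not work for every model).
model.compile(
    optimizer=tf.keras.optimizers.experimental.Adam(learning_rate=1e-3),
    loss="mse",
    jit_compile=True,
)

x = tf.random.normal((16, 32, 32, 3))
y = tf.random.normal((16, 1))
model.fit(x, y, epochs=1)

# Route Keras logs to absl logging instead of stdout, then back again.
tf.keras.utils.disable_interactive_logging()
model.evaluate(x, y, verbose="auto")  # "auto" -> verbose=2 while interactive logging is off
tf.keras.utils.enable_interactive_logging()
```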
`tf.lite`:
- Added TFLite builtin op support for the following TF ops:
  - `tf.math.argmin`/`tf.math.argmax` for input data type `tf.bool` on CPU.
  - `tf.nn.gelu` op for output data type `tf.float32` and quantization on CPU.
- Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
- Add support for unsigned 16-bit integer tensor types in cast op.
- Experimental support for lowering `list_ops.tensor_list_set_item` with `DynamicUpdateSlice`.
... (truncated)
Changelog
Sourced from tensorflow's changelog. The 2.9.1 and 2.9.0 entries repeat the release notes quoted above, with one additional `tf.lite` item:
- Enabled a new MLIR-based dynamic range quantization backend by default. The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
... (truncated)
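A sketch of the post-training dynamic range quantization path that the new MLIR-based backend now handles; the model here is a hypothetical stand-in for a trained one:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables dynamic range quantization
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```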
Commits
- `d8ce9f9` Merge pull request #56214 from tensorflow/mm-disable-tests-on-r2.9
- `6235394` Disable flaky tests
- `466ed37` Merge pull request #56203 from mseth10/r2.9-arm
- `5dda790` trigger cd on r2.9 push
- `c6777ec` update skip test list for r2.9
- `1584eda` Merge pull request #56198 from tensorflow/cp-apple-on-r2.9
- `7b3588f` Merge pull request #56097 from mseth10:master-arm
- `21bd9f7` Merge pull request #56195 from tensorflow/r2.9-60d5bfbf024
- `3db166b` Add upper bound to protobuf in setup.py.
- `82467d5` Merge pull request #56192 from tensorflow-jenkins/version-numbers-2.9.1-11510
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)