chore(deps): bump tensorflow-macos from 2.7.0 to 2.9.2 in /examples/tests
Bumps tensorflow-macos from 2.7.0 to 2.9.2.
Release notes
Sourced from tensorflow-macos's releases.
TensorFlow 2.9.1
Release 2.9.1
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See tensorflow/tensorflow#53234, protocolbuffers/protobuf#9954 and tensorflow/tensorflow#56077.
TensorFlow 2.9.0
Release 2.9.0
Breaking Changes
- Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
- Build, Compilation and Packaging
- TensorFlow is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
- TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
- Discussion for these changes can be found on SIG Build's TensorFlow Community Forum thread
- The `tf.keras.mixed_precision.experimental` API has been removed. The non-experimental symbols under `tf.keras.mixed_precision` have been available since TensorFlow 2.4 and should be used instead.
- The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes:
- Remove the word "experimental" from `tf.keras.mixed_precision` symbols. E.g., replace `tf.keras.mixed_precision.experimental.global_policy` with `tf.keras.mixed_precision.global_policy`.
- Replace `tf.keras.mixed_precision.experimental.set_policy` with `tf.keras.mixed_precision.set_global_policy`. The experimental symbol `set_policy` was renamed to `set_global_policy` in the non-experimental API.
- Replace `LossScaleOptimizer(opt, "dynamic")` with `LossScaleOptimizer(opt)`. If you pass anything other than `"dynamic"` to the second argument, see (1) of the next section.
- In the following rare cases, you need to make more changes when switching to the non-experimental API:
- If you passed anything other than `"dynamic"` to the `loss_scale` argument (the second argument) of `LossScaleOptimizer`:
- The LossScaleOptimizer constructor takes in different arguments. See the TF 2.7 documentation of tf.keras.mixed_precision.experimental.LossScaleOptimizer for details on the differences, which has examples on how to convert to the non-experimental LossScaleOptimizer.
- If you passed a value to the `loss_scale` argument (the second argument) of `Policy`:
- The experimental version of `Policy` optionally took in a `tf.compat.v1.mixed_precision.LossScale` in the constructor, which defaulted to a dynamic loss scale for the `"mixed_float16"` policy and no loss scale for other policies. In `Model.compile`, if the model's policy had a loss scale, the optimizer would be wrapped with a `LossScaleOptimizer`. With the non-experimental `Policy`, there is no loss scale associated with the `Policy`, and `Model.compile` wraps the optimizer with a `LossScaleOptimizer` if and only if the policy is a `"mixed_float16"` policy. If you previously passed a `LossScale` to the experimental `Policy`, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a `LossScaleOptimizer` before passing it to `Model.compile`.
- If you use the very rarely-used function `tf.keras.mixed_precision.experimental.get_layer_policy`:
- Replace `tf.keras.mixed_precision.experimental.get_layer_policy(layer)` with `layer.dtype_policy`.
- `tf.mixed_precision.experimental.LossScale` and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed `tf.keras.mixed_precision.experimental` API. The symbols are still available under `tf.compat.v1.mixed_precision`.
- The `experimental_relax_shapes` heuristic for `tf.function` has been deprecated and replaced with `reduce_retracing`, which encompasses broader heuristics to reduce the number of retraces (see below).
Major Features and Improvements
tf.keras:
- Added `tf.keras.applications.resnet_rs` models. This includes the `ResNetRS50`, `ResNetRS101`, `ResNetRS152`, `ResNetRS200`, `ResNetRS270`, `ResNetRS350` and `ResNetRS420` model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies.
- Added `tf.keras.optimizers.experimental.Optimizer`. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on `tf.keras.optimizers.experimental.Optimizer`. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols `tf.keras.optimizers.Optimizer`/`Adam`/etc. will point to the new optimizers, and the previous generation of optimizers will be moved to `tf.keras.optimizers.legacy.Optimizer`/`Adam`/etc.
- Added L2 unit normalization layer `tf.keras.layers.UnitNormalization`.
- Added `tf.keras.regularizers.OrthogonalRegularizer`, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
- Added `tf.keras.layers.RandomBrightness` layer for image preprocessing.
- Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout but can only view the logs. You can use `tf.keras.utils.disable_interactive_logging()` to write the logs to ABSL logging. You can also use `tf.keras.utils.enable_interactive_logging()` to change it back to stdout, or `tf.keras.utils.is_interactive_logging_enabled()` to check whether interactive logging is enabled.
- Changed the default value of the `verbose` argument of `Model.evaluate()` and `Model.predict()` to `"auto"`, which defaults to `verbose=1` for most cases and to `verbose=2` when used with `ParameterServerStrategy` or with interactive logging disabled.
- Argument `jit_compile` in `Model.compile()` now applies to `Model.evaluate()` and `Model.predict()`. Setting `jit_compile=True` in `compile()` compiles the model's training, evaluation, and inference steps to XLA. Note that `jit_compile=True` may not necessarily work for all models.
- Added DTensor-related Keras APIs under the `tf.keras.dtensor` namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
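The `verbose="auto"` default described above can be sketched in plain Python. This is a hypothetical helper for illustration only, not the actual Keras implementation (`resolve_verbose` and its parameter names are invented here):

```python
def resolve_verbose(verbose="auto", using_parameter_server=False,
                    interactive_logging=True):
    """Sketch of how the new "auto" default for Model.evaluate()/predict()
    resolves, per the release notes: verbose=2 under ParameterServerStrategy
    or with interactive logging disabled, otherwise verbose=1.
    Hypothetical helper, not the real Keras code."""
    if verbose == "auto":
        if using_parameter_server or not interactive_logging:
            return 2  # one summary line per call, safe for log files
        return 1      # progress bar for interactive sessions
    return verbose    # explicit values pass through unchanged
```

For example, `resolve_verbose()` yields `1`, while `resolve_verbose(interactive_logging=False)` yields `2`.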
tf.lite:
- Added TFLite builtin op support for the following TF ops:
- `tf.math.argmin`/`tf.math.argmax` for input data type `tf.bool` on CPU.
- `tf.nn.gelu` op for output data type `tf.float32` and quantization on CPU.
- Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
- Add support for unsigned 16-bit integer tensor types in cast op.
- Experimental support for lowering `list_ops.tensor_list_set_item` with `DynamicUpdateSlice`.
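To illustrate the semantics of `argmin`/`argmax` on boolean input, here is a pure-Python sketch (not TFLite code): booleans order as `False < True`, so `argmax` returns the index of the first `True` and `argmin` the index of the first `False`.

```python
def argmax(values):
    # Index of the first maximum element; for booleans this is the
    # first True (False < True), mirroring argmax over a bool tensor.
    return max(range(len(values)), key=lambda i: values[i])

def argmin(values):
    # Index of the first minimum element; for booleans, the first False.
    return min(range(len(values)), key=lambda i: values[i])
```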
... (truncated)
Changelog
Sourced from tensorflow-macos's changelog.
Release 2.10.0
Breaking Changes
- Some files in `tensorflow/python/training` have been moved to `tensorflow/python/tracking` and `tensorflow/python/checkpoint`. Please update your imports accordingly; the old files will be removed in Release 2.11.
- RNG behavior change for `tf.keras.initializers`. Keras initializers will now use stateless random ops to generate random numbers.
- Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (`seed=None`), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
- An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.
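The stateless behavior described above can be sketched in plain Python. This is a hypothetical stand-in for a Keras initializer, not the real class: the point is that the seed is fixed at creation time, so every call replays the same random stream.

```python
import random
import warnings

class StatelessInitializer:
    """Sketch of the TF 2.10 initializer RNG behavior described above
    (hypothetical class, not tf.keras.initializers)."""

    def __init__(self, seed=None):
        self.explicit_seed = seed is not None
        # Unseeded: a seed is created once at construction, so different
        # instances differ, but repeated calls on one instance do not.
        self.seed = seed if self.explicit_seed else random.randrange(2**31)
        self.calls = 0

    def __call__(self, shape):
        self.calls += 1
        if not self.explicit_seed and self.calls > 1:
            warnings.warn("Unseeded initializer reused: it will return "
                          "the same values as the previous call.")
        rng = random.Random(self.seed)  # stateless: fresh stream per call
        n = 1
        for dim in shape:
            n *= dim
        return [rng.random() for _ in range(n)]
```

Here `init = StatelessInitializer(); init((2, 2)) == init((2, 2))` holds (with a reuse warning on the second call), and two instances constructed with the same explicit seed produce identical values.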
Known Caveats
Major Features and Improvements
tf.lite:
- New operations supported:
- tflite SelectV2 now supports 5D.
- tf.einsum is supported with multiple unknown shapes.
- tf.unsortedsegmentprod op is supported.
- Upgrade Flatbuffers to v2.0.5 from v1.12.0.
tf.keras:
- Added `tf.keras.models.experimental.SharpnessAwareMinimization`. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
- `EinsumDense` layer moved from experimental to core. Its import path moved from `tf.keras.layers.experimental.EinsumDense` to `tf.keras.layers.EinsumDense`.
- Added `tf.keras.utils.audio_dataset_from_directory` utility to easily generate audio classification datasets from directories of `.wav` files.
- Added `subset="both"` support in `tf.keras.utils.image_dataset_from_directory`,
... (truncated)
Commits
- See full diff in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have the user @dependabot[bot] on file. In order for us to review and merge your code, please start the CLA process at https://determined.ai/cla.
After we approve your CLA, we will update the contributors list (private) and comment @cla-bot[bot] check to rerun the check.
Deploy Preview for determined-ui canceled.
| Name | Link |
|---|---|
| Latest commit | a774e81501506ac4f435d9610655d2034065af76 |
| Latest deploy log | https://app.netlify.com/sites/determined-ui/deploys/62a9ee31c4ebea0008f5399d |
@cla-bot[bot] check
The cla-bot has been summoned, and re-checked this pull request!
Superseded by #5040.