TensorFlow Addons Wind Down

Open seanpmorgan opened this issue 1 year ago • 13 comments

Dear contributors and users of TensorFlow Addons,

As many of you know, TensorFlow Addons (TFA) is a repository of community maintained and contributed extensions for TensorFlow, first created in 2018 and maintained by the SIG-Addons community. Over the course of 4 years, 200 contributors have built the TFA repository into a community owned and managed success that is being utilized by over 8,000 github repositories according to our dependency graph. I’d like to take a moment to sincerely thank everyone involved as a contributor or community member for their efforts.

Recently, there has been increasing overlap in contributions and scope between TFA and the Keras-CV and Keras-NLP libraries. To prevent future overlap, we believe that new and existing addons to TensorFlow will be best maintained in Keras project repositories, where possible.

Decision to Wind Down TensorFlow Addons

We believe that it is in the best interest of the TensorFlow community to consolidate where TensorFlow extensions can be utilized, maintained and contributed. Because of this, it is bittersweet that we are announcing our plans to move TensorFlow Addons to a minimal maintenance and release mode.

TFA SIG Addons will be ending development and introduction of new features to this repository. TFA will be transitioning to a minimal maintenance and release mode for one year in order to give appropriate time for you to adjust any dependencies to the overlapping repositories in our TensorFlow community (Keras, Keras-CV, and Keras-NLP). Going forward, please consider contributing to the Keras-CV and Keras-NLP projects.

Background:

The original RFC proposal for TFA was dated 2018-12-14, with the stated goal of building a community-managed repository for contributions that conform to well-established API patterns but implement new functionality not available in core TensorFlow, as defined in our Special Interest Group (SIG) charter.

As the years have progressed, new repositories with healthy contributor communities (Keras-CV, Keras-NLP, etc.) have been created with goals similar to ours, and their criteria for contribution acceptance overlap significantly with ours (e.g. the number of required citations). Additionally, since Keras split out of core TensorFlow in 2020, the barrier for community contribution has been substantially lowered.

Understandably, there has been increasing ambiguity regarding where contributions should land and where they will be best maintained. Many features that are available in TFA are simultaneously available in other TensorFlow Community repositories. As just a few examples:

  • Random Cutout: TFA & Keras-CV
  • AdamW Optimizer: TFA & Keras
  • Multihead Attention: TFA & Keras

As part of the original RFC, our Special Interest Group agreed to migrate code from tf.contrib and keras.contrib repositories. In doing so, TFA inherited C++ custom-ops, which made TFA a unique place in the TensorFlow community to contribute C++ custom ops to be built and distributed. However, we’ve recently helped in migrating much of that infrastructure to Keras-CV so that they can compile and distribute custom ops as they see fit.

What’s Next:

  • One year of maintenance and releases for TFA
  • Addition of warnings into the next 0.20 TFA release
  • Creation of a public document analyzing where TFA features already overlap with the other repositories
  • GitHub Tracking Project

seanpmorgan avatar Feb 10 '23 01:02 seanpmorgan

As with everything else on this community-led repository: please share your thoughts, concerns, etc. so we can facilitate a healthy discussion.

seanpmorgan avatar Feb 10 '23 01:02 seanpmorgan

@seanpmorgan yeah this is a hard decision but we always wanted to reduce those overlaps, and this makes sense. I just want to take a moment to thank you, the other maintainers, and the community members for their efforts! 🍻

AakashKumarNain avatar Feb 10 '23 03:02 AakashKumarNain

First, I wanted to thank the maintainers of TensorFlow Addons for their work; I have been using TF Addons happily for several years now. Thanks!

I use the following TF Addons functionality that is currently not in any TF/Keras package, as far as I know:

  • LazyAdam: for NLP tasks with large embedding matrices and small batch sizes, LazyAdam seems to deliver better performance than Adam (measured across ~5 NLU tasks).

    The ideal approach here is probably to add support for lazy=False/True directly to tf.keras.optimizers, if the Keras authors agree.

  • the seq2seq module: Keras NLP has keras_nlp.samplers, which provides the decoding, but it was nice to have the BasicDecoder together with ready-made attention mechanisms; maybe there are plans to add these to keras_nlp.

  • the CRF layer: I am using tfa.text.crf_log_likelihood and tfa.text.crf_decode in a few projects; hopefully some kind of CRF will be added to keras_nlp too.

foxik avatar Mar 17 '23 15:03 foxik
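
For readers unfamiliar with the lazy-update idea in the first bullet, the difference from plain Adam can be sketched in a few lines of NumPy. This is a simplified illustration of the concept, not the tfa.optimizers.LazyAdam implementation; the function name and signature are invented for the example. Only the embedding rows that actually received gradient in the current batch have their moment estimates and weights touched:

```python
import numpy as np

def lazy_adam_step(params, grads, rows, m, v, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-7):
    """One 'lazy' Adam step: only the rows listed in `rows` (the
    embedding rows that appeared in the batch) get moment and weight
    updates; all other rows are left completely untouched."""
    t += 1
    m[rows] = beta1 * m[rows] + (1 - beta1) * grads
    v[rows] = beta2 * v[rows] + (1 - beta2) * grads ** 2
    m_hat = m[rows] / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v[rows] / (1 - beta2 ** t)   # bias-corrected second moment
    params[rows] -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return t

# Toy embedding matrix: only rows 1 and 3 occur in the current batch.
emb = np.ones((5, 4))
m, v = np.zeros_like(emb), np.zeros_like(emb)
grads = np.full((2, 4), 0.5)
t = lazy_adam_step(emb, grads, np.array([1, 3]), m, v, t=0)
```

With a dense Adam, rows 0, 2, and 4 would still be touched through the moment decay; here they keep their values and zero moments, which is the behavior that matters for large, sparsely-hit embedding tables.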

Sorry, I am a newbie, but could you please put "InstanceNormalization" into the normal tf layers? Would that be okay?

ayaderaghul avatar May 22 '23 12:05 ayaderaghul

@ayaderaghul

https://www.tensorflow.org/api_docs/python/tf/keras/layers/GroupNormalization

Relation to Instance Normalization: If the number of groups is set to the input dimension (number of groups is equal to number of channels), then this operation becomes identical to Instance Normalization.

MarkDaoust avatar May 24 '23 21:05 MarkDaoust

So does that mean that, for a color image (number of channels = 3), we set the number of groups to 3?

ayaderaghul avatar May 25 '23 03:05 ayaderaghul

@ayaderaghul that's what it sounds like to me.

MarkDaoust avatar May 30 '23 20:05 MarkDaoust
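
That reading can be checked numerically. The sketch below uses minimal NumPy reimplementations (instance_norm and group_norm here are illustrations of the math, not the Keras layers): with the number of groups equal to the number of channels, e.g. 3 for an RGB image, group normalization reduces exactly to per-channel instance normalization.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Per-sample, per-channel normalization over the spatial axes.
    x has shape (batch, height, width, channels)."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def group_norm(x, groups, eps=1e-5):
    """Group normalization: channels are split into `groups`, and each
    group is normalized over its spatial and channel extent."""
    b, h, w, c = x.shape
    xg = x.reshape(b, h, w, groups, c // groups)
    mean = xg.mean(axis=(1, 2, 4), keepdims=True)
    var = xg.var(axis=(1, 2, 4), keepdims=True)
    return ((xg - mean) / np.sqrt(var + eps)).reshape(b, h, w, c)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8, 8, 3))      # a batch of 8x8 RGB "images"
out = group_norm(x, groups=3)          # groups == channels
```

With groups == channels each group contains exactly one channel, so the group statistics collapse to per-channel statistics, matching the docs' statement.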

I have a project using pix2pix from tensorflow_examples, similar to this tutorial: https://www.tensorflow.org/tutorials/generative/cyclegan

In this project, the model is pix2pix, with InstanceNorm

generator_g = pix2pix.unet_generator(OUTPUT_CHANNELS, norm_type='instancenorm')

When I train the model, save it, and then continue training, it returns an error saying something like "No InstanceNormalization". I have to install tensorflow_addons and add InstanceNormalization as a custom object in order to load the model.

I have tried the following alternatives but they don't work:

  • copy the InstanceNormalization class directly from tensorflow_examples and pass it to the loaded model as a custom object
  • use GroupNormalization as the custom object

Both alternatives return the same error, "No InstanceNormalization", so this code specifically requires the InstanceNormalization defined by tensorflow_addons.

Please help!

ayaderaghul avatar May 31 '23 01:05 ayaderaghul

Is there a replacement for tfa.optimizers.CyclicalLearningRate?

Searching the internet turned up bckenstler/CLR, which is many years old, and its newer fork brianmanderson/Cyclical_Learning_Rate, which is also two years old, but no officially preferred solution.

CLR isn't that complicated, so if it's not part of any actively maintained package, I'll probably just grab a local copy of the class into my project tree.

Technologicat avatar Jul 31 '23 08:07 Technologicat
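
For reference, the triangular policy those repositories implement (following Smith's CLR paper, which tfa.optimizers.CyclicalLearningRate generalizes) fits in a few lines, which may be easier than taking on an unmaintained dependency. This is a standalone sketch of the schedule formula, not the TFA class:

```python
import math

def triangular_clr(step, base_lr=1e-4, max_lr=1e-3, step_size=2000):
    """Triangular cyclical learning rate (Smith, 2017): the LR ramps
    linearly from base_lr up to max_lr over step_size steps, back down
    over the next step_size steps, then the cycle repeats."""
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# LR at the start, quarter points, peak, and end of the first cycle.
lrs = [triangular_clr(s) for s in (0, 1000, 2000, 3000, 4000)]
```

Wrapping a function like this in a tf.keras.optimizers.schedules.LearningRateSchedule subclass would let it plug into a Keras optimizer directly.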

Will the public document indicate how to replace each TFA function with a Keras alternative? For example, the RectifiedAdam and Lookahead optimizers.

jazzycap avatar Sep 12 '23 14:09 jazzycap
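
Until such a document exists, one of the techniques mentioned, Lookahead (Zhang et al., 2019), is simple enough to sketch by hand: an inner optimizer updates the "fast" weights, and every k steps the "slow" weights are pulled a fraction alpha toward them. The function below is a minimal NumPy illustration (the name and signature are invented for the example, not the TFA API):

```python
import numpy as np

def lookahead_update(fast, slow, step, k=5, alpha=0.5):
    """Lookahead outer step: every k inner-optimizer steps, move the
    slow weights a fraction alpha toward the fast weights, then reset
    the fast weights to the new slow weights. Updates both in place."""
    if step % k == 0:
        slow += alpha * (fast - slow)
        fast[...] = slow
    return fast, slow

# Toy run: plain SGD as the inner optimizer on f(w) = w^2 / 2.
fast = np.array([1.0])
slow = fast.copy()
for step in range(1, 11):
    fast -= 0.1 * fast              # inner SGD step (gradient = w)
    lookahead_update(fast, slow, step)
```

In TFA this wrapping is what tfa.optimizers.Lookahead does around an arbitrary inner optimizer such as RectifiedAdam.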

Is the MultiOptimizer available elsewhere?

sainathadapa avatar Nov 09 '23 16:11 sainathadapa

Hi,

Is there any equivalent to dense_image_warp in KerasCV/Keras?

Doniach28 avatar Dec 19 '23 13:12 Doniach28
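
For context while waiting for an answer, the semantics of tfa.image.dense_image_warp are small enough to sketch: each output pixel bilinearly samples the input image at its own location displaced by the flow field. This NumPy version is an illustration of those semantics, not the TFA implementation, and its boundary handling (clamping query points to the image border) may differ from TFA's:

```python
import numpy as np

def dense_image_warp(image, flow):
    """Sketch of dense_image_warp: output pixel (i, j) bilinearly
    samples the input at (i - flow[..., 0], j - flow[..., 1]).
    image: (batch, height, width, channels); flow: (batch, height, width, 2).
    Out-of-range query points are clamped to the image border."""
    b, h, w, c = image.shape
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    out = np.empty_like(image, dtype=float)
    for n in range(b):
        qy = np.clip(gy - flow[n, ..., 0], 0, h - 1)
        qx = np.clip(gx - flow[n, ..., 1], 0, w - 1)
        y0, x0 = np.floor(qy).astype(int), np.floor(qx).astype(int)
        y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
        wy, wx = (qy - y0)[..., None], (qx - x0)[..., None]
        img = image[n].astype(float)
        out[n] = (img[y0, x0] * (1 - wy) * (1 - wx)     # 4-corner
                  + img[y0, x1] * (1 - wy) * wx          # bilinear
                  + img[y1, x0] * wy * (1 - wx)          # blend
                  + img[y1, x1] * wy * wx)
    return out

# Zero flow should reproduce the input exactly.
img = np.arange(16, dtype=float).reshape(1, 4, 4, 1)
warped = dense_image_warp(img, np.zeros((1, 4, 4, 2)))
```

A local copy along these lines may serve as a stopgap if no KerasCV equivalent turns up.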

TFA meant a lot to me. Thank you.

hansk0812 avatar Mar 26 '24 11:03 hansk0812