Updated Adaptive Pooling layers
Updated the Adaptive Pooling layers to be analogous to the torch.nn.Adaptive*Pool*d layers in PyTorch.
Fixes # (issue). The Adaptive Pooling layers had constraints on the shapes of their inputs. Some of those constraints have been relaxed, and the outputs now match those produced by the adaptive pooling layers in PyTorch.
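For illustration, a minimal usage sketch of the relaxed behaviour (assuming the `tfa.layers.AdaptiveAveragePooling1D` name from this package; the shapes are just an example, not taken from the tests in this PR):

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Input length 10 is not evenly divisible by the requested output size 4.
# Previously this violated a shape constraint; now the layer pools over
# adaptively sized windows, like torch.nn.AdaptiveAvgPool1d.
inputs = tf.random.normal([2, 10, 3])               # (batch, length, channels)
layer = tfa.layers.AdaptiveAveragePooling1D(output_size=4)
outputs = layer(inputs)
print(outputs.shape)                                 # (2, 4, 3)
```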
Type of change
- [x] Bug fix
- [ ] New Tutorial
- [ ] Updated or additional documentation
- [x] Additional Testing
- [ ] New Activation and the changes conform to the activation contribution guidelines
- [ ] New Callback and the changes conform to the callback contribution guidelines
- [ ] New Image addition and the changes conform to the image op contribution guidelines
- [x] New Layer and the changes conform to the layer contribution guidelines
- [ ] New Loss and the changes conform to the loss contribution guidelines
- [ ] New Metric and the changes conform to the metric contribution guidelines
- [ ] New Optimizer and the changes conform to the optimizer contribution guidelines
- [ ] New RNN Cell and the changes conform to the rnn contribution guidelines
- [ ] New Seq2seq addition and the changes conform to the seq2seq contribution guidelines
- [ ] New Text addition and the changes conform to the text op contribution guidelines
Checklist:
- [x] I've properly formatted my code according to the guidelines
- [x] By running Black + Flake8
- [x] By running pre-commit hooks
- [x] This PR addresses an already submitted issue for TensorFlow Addons
- [x] I have made corresponding changes to the documentation
- [x] I have added tests that prove my fix is effective or that my feature works
- [ ] This PR contains modifications to C++ custom-ops
How Has This Been Tested?
If you're adding a bug fix or a new feature, please describe the tests that you ran to verify your changes:
- The new layers were extensively tested against their PyTorch counterparts.
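A sketch of the kind of cross-check described above (the helper name, shapes, and tolerances are illustrative, not the actual test code from this PR):

```python
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
import torch

def compare_adaptive_avg_pool_1d(length, channels, output_size, batch=2):
    """Check that the TFA layer matches torch.nn.AdaptiveAvgPool1d."""
    x = np.random.rand(batch, length, channels).astype(np.float32)

    # TFA expects channels_last: (batch, length, channels).
    tfa_out = tfa.layers.AdaptiveAveragePooling1D(output_size)(tf.constant(x)).numpy()

    # PyTorch expects channels_first: (batch, channels, length).
    torch_out = torch.nn.AdaptiveAvgPool1d(output_size)(
        torch.from_numpy(x.transpose(0, 2, 1))
    ).numpy().transpose(0, 2, 1)

    np.testing.assert_allclose(tfa_out, torch_out, rtol=1e-5, atol=1e-6)

# Length 10 is not divisible by 4, which is the case this PR relaxes.
compare_adaptive_avg_pool_1d(length=10, channels=3, output_size=4)
```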
Thanks for the PR. To prevent a performance regression, it would be better to do the following:
```python
def _adaptive_pooling_1d(inputs, output_size, data_format, reduction_function):
    def divisible_case():
        # Original code goes here
        pass

    def nondivisible_case():
        # This PR goes here
        pass

    input_shape = tf.shape(inputs)
    length = input_shape[1] if data_format == "channels_last" else input_shape[2]
    is_divisible = length % output_size == 0
    return tf.keras.backend.switch(is_divisible, divisible_case, nondivisible_case)


class AdaptivePooling1D(tf.keras.layers.Layer):
    def call(self, inputs):
        return _adaptive_pooling_1d(
            inputs, self.output_size, self.data_format, self.reduction_function
        )
```

For 2D and 3D, can we use the following pattern to reduce the number of reduction and slicing calls?
```python
for i in range(output_size[0]):
    for j in range(output_size[1]):
        for k in range(output_size[2]):
            sliced_inputs = inputs[:, foo1:bar1, foo2:bar2, foo3:bar3, :]
            pooled = reduction_function(sliced_inputs, axis=[1, 2, 3])
```
The former can be done. The latter, however, would slow the code down further, since it addresses each output cell along every axis independently, which is O(n^3) slicing and reduction calls for 3D inputs. With independent loops, each loop processes one whole axis at a time, so a 3D input needs only O(3n) such calls, and the elements touched by each call can be processed in parallel.
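A minimal sketch of the independent per-axis pattern described above (helper names such as `_pool_along_axis` and `adaptive_pool_3d` are illustrative, not the code in this PR):

```python
import tensorflow as tf

def _pool_along_axis(inputs, output_size, axis, reduction_function):
    """Adaptively pool a single spatial axis, leaving the other axes untouched."""
    length = tf.shape(inputs)[axis]
    pooled = []
    for i in range(output_size):
        # Adaptive window bounds, mirroring PyTorch's start/end index formula.
        start = (i * length) // output_size
        end = ((i + 1) * length + output_size - 1) // output_size
        window = tf.gather(inputs, tf.range(start, end), axis=axis)
        pooled.append(reduction_function(window, axis=axis, keepdims=True))
    return tf.concat(pooled, axis=axis)

def adaptive_pool_3d(inputs, output_size, reduction_function=tf.reduce_mean):
    """channels_last inputs of shape (batch, d1, d2, d3, channels)."""
    # Three independent passes: O(n1 + n2 + n3) slices instead of O(n1 * n2 * n3).
    for axis, size in zip((1, 2, 3), output_size):
        inputs = _pool_along_axis(inputs, size, axis, reduction_function)
    return inputs
```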
Great, I wasn't aware of that :-)
Thank you for your contribution. We sincerely apologize for any delay in reviewing, but TensorFlow Addons is transitioning to a minimal maintenance and release mode. New features will not be added to this repository. For more information, please see our public messaging on this decision: TensorFlow Addons Wind Down
Please consider sending feature requests / contributions to other repositories in the TF community with similar charters to TFA: Keras, Keras-CV, and Keras-NLP.