
Adaptive Parametric Activation [ECCV 2024 Oral]

Konstantinos Panagiotis Alexandridis1, Jiankang Deng1, Anh Nguyen2, Shan Luo3,

1 Huawei Noah's Ark Lab, 2 University of Liverpool, 3 King's College London


This is the official implementation of Adaptive Parametric Activation (APA), our ECCV 2024 oral paper.

APA unifies most activation functions under the same formula.

News

  • [2025/11/17] We have extended our work to more tasks, including large-scale image classification on ImageNet21K, image generation with transformers, and multi-modal LLM training with LLaVA One-Vision. The preprint is available here.
  • [2024/07/11] Our paper was accepted at ECCV 2024 as an oral presentation.

Abstract

The activation function plays a crucial role in model optimisation, yet the optimal choice remains unclear. For example, the Sigmoid activation is the de facto activation in balanced classification tasks; however, in imbalanced classification it proves inappropriate due to bias towards frequent classes. In this work, we delve deeper into this phenomenon by performing a comprehensive statistical analysis of the classification and intermediate layers of both balanced and imbalanced networks, and we empirically show that aligning the activation function with the data distribution enhances performance in both balanced and imbalanced tasks. To this end, we propose the Adaptive Parametric Activation (APA) function, a novel and versatile activation function that unifies most common activation functions under a single formula. APA can be applied in both intermediate layers and attention layers, significantly outperforming the state-of-the-art on several imbalanced benchmarks such as ImageNet-LT, iNaturalist2018, Places-LT, CIFAR100-LT and LVIS, and balanced benchmarks such as ImageNet1K, COCO and V3DET.

Definition

APA is defined as: $\mathrm{APA}(z, \lambda, \kappa) = (\lambda e^{-\kappa z} + 1)^{-\frac{1}{\lambda}}$, where $\lambda$ and $\kappa$ are learnable parameters.
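For intuition: setting $\lambda = 1$ recovers the Sigmoid, since $(e^{-\kappa z} + 1)^{-1} = \mathrm{Sigmoid}(\kappa z)$, while letting $\lambda \to 0$ approaches the Gumbel activation $\exp(-e^{-\kappa z})$. A minimal numerical check of these two limits (a sketch for illustration, not part of the official code):

import torch

def apa(z, lam, kappa):
    """APA(z, lam, kappa) = (lam * exp(-kappa * z) + 1) ** (-1 / lam)."""
    return (lam * torch.exp(-kappa * z) + 1) ** (-1.0 / lam)

z = torch.linspace(-4, 4, 9)
# lam = 1, kappa = 1 recovers the Sigmoid.
assert torch.allclose(apa(z, 1.0, 1.0), torch.sigmoid(z))
# As lam -> 0, APA approaches the Gumbel activation exp(-exp(-z)).
assert torch.allclose(apa(z, 1e-4, 1.0), torch.exp(-torch.exp(-z)), atol=1e-3)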

APA can be used inside the intermediate layers via the Adaptive Generalised Linear Unit (AGLU): $\mathrm{AGLU}(z, \lambda, \kappa) = z \cdot \mathrm{APA}(z, \lambda, \kappa)$. The figure below shows the derivatives of AGLU with respect to $\kappa$ (top), $\lambda$ (middle) and $z$ (bottom).
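With $\lambda = \kappa = 1$, AGLU coincides with SiLU/Swish, since $z \cdot \mathrm{Sigmoid}(z)$ is exactly SiLU. A quick self-contained check (the helper name aglu here is our own, for illustration):

import torch
import torch.nn.functional as F

def aglu(z, lam, kappa):
    """AGLU(z, lam, kappa) = z * APA(z, lam, kappa)."""
    return z * (lam * torch.exp(-kappa * z) + 1) ** (-1.0 / lam)

z = torch.linspace(-4, 4, 9)
# lam = kappa = 1 gives z * sigmoid(z), i.e. SiLU/Swish.
assert torch.allclose(aglu(z, 1.0, 1.0), F.silu(z))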

Simple Code Implementation

import torch
import torch.nn as nn

class Unified(nn.Module):
    """Unified (APA) activation function module with learnable lambda and kappa."""

    def __init__(self, device=None, dtype=None) -> None:
        """Initialize the Unified activation function."""
        factory_kwargs = {"device": device, "dtype": dtype}
        super().__init__()
        lambda_param = torch.nn.init.uniform_(torch.empty(1, **factory_kwargs))
        kappa_param = torch.nn.init.uniform_(torch.empty(1, **factory_kwargs))
        self.softplus = nn.Softplus(beta=-1.0)  # softplus with beta=-1 computes -log(1 + exp(-x))
        self.lambda_param = nn.Parameter(lambda_param)
        self.kappa_param = nn.Parameter(kappa_param)

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        """Compute APA(input) = (lambda * exp(-kappa * input) + 1) ** (-1 / lambda)."""
        l = torch.clamp(self.lambda_param, min=0.0001)  # keep lambda positive for numerical stability
        p = torch.exp((1 / l) * self.softplus((self.kappa_param * input) - torch.log(l)))
        return p  # for AGLU, simply return p * input
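A minimal usage sketch, assuming the Unified class above is in scope; the AGLU wrapper and the layer sizes here are our own illustration, following the comment in forward:

import torch
import torch.nn as nn

class AGLU(nn.Module):
    """AGLU activation: multiply the input by its Unified (APA) gate."""

    def __init__(self) -> None:
        super().__init__()
        self.gate = Unified()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)

mlp = nn.Sequential(nn.Linear(16, 32), AGLU(), nn.Linear(32, 10))
out = mlp(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 10])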

Getting Started

  • For Image Classification, please see the README.md for detailed instructions and model weights.
  • For Object Detection and Instance Segmentation, please see the README.md for detailed instructions and model weights.

BibTeX

@inproceedings{alexandridis2024adaptive,
  title={Adaptive Parametric Activation},
  author={Alexandridis, Konstantinos Panagiotis and Deng, Jiankang and Nguyen, Anh and Luo, Shan},
  booktitle={European Conference on Computer Vision},
  pages={455--476},
  year={2024},
  organization={Springer}
}

Acknowledgements

This code uses PyTorch and the mmdet framework. Thank you for your wonderful work!