Hanxun Huang
Active-Passive-Losses
[ICML2020] Normalized Loss Functions for Deep Learning with Noisy Labels
RobustWRN
[NeurIPS2021] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
SCELoss-Reproduce
[ICCV2019] Reproduction of results for "Symmetric Cross Entropy for Robust Learning with Noisy Labels" (https://arxiv.org/abs/1908.06112)
Unlearnable-Examples
[ICLR2021] Unlearnable Examples: Making Personal Data Unexploitable
MDAttack
[Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness
CognitiveDistillation
[ICLR2023] Distilling Cognitive Backdoor Patterns within an Image
Detect-CLIP-Backdoor-Samples
[ICLR2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining