Advancements in Adversarial Robustness and Continual Adaptation in Deep Learning

Recent developments in this area center on making deep learning models both robust to adversarial attacks and adaptable to changing environments. One clear trend is the design of new frameworks and methods for improving adversarial robustness, particularly in object detection and wildfire detection, including supervised contrastive learning, standard-deviation-inspired regularization, and model-agnostic frameworks for evaluating robustness. There is also growing interest in continual test-time adaptation strategies that preserve model plasticity over long timescales, sustaining performance in non-stationary environments. On the theoretical side, the field is advancing robust supervised contrastive losses against label noise and analyses of difficult-to-learn examples in contrastive learning.

Noteworthy Papers

  • Evaluating the Adversarial Robustness of Detection Transformers: Provides a comprehensive evaluation of DETR models under adversarial attack, revealing significant vulnerabilities and proposing a novel untargeted attack method.
  • Distortion-Aware Adversarial Attacks on Bounding Boxes of Object Detectors: Introduces a method that fools object detectors by perturbing object confidence scores, achieving high success rates in both white-box and black-box settings; a PGD-style confidence-suppression sketch follows this list.
  • Enhancing Adversarial Robustness of Deep Neural Networks Through Supervised Contrastive Learning: Combines supervised contrastive learning with a margin-based contrastive loss and reports significant gains in adversarial accuracy; see the loss sketch after this list.
  • Standard-Deviation-Inspired Regularization for Improving Adversarial Robustness: Proposes a regularization term inspired by standard deviation to harden deep neural networks against stronger adversarial attacks; see the regularizer sketch after this list.
  • Adversarial Robustness for Deep Learning-based Wildfire Detection Models: Introduces WARP, a model-agnostic framework for evaluating the adversarial robustness of DNN-based wildfire detection models, and argues for closing the gaps it exposes through data augmentation.
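
The two detector attacks above share a common recipe: iteratively perturb the input within an L-infinity budget so that the model's object confidence collapses. Below is a minimal, hypothetical PGD-style sketch of that idea; the ToyDetector stand-in, the step sizes, and the confidence-suppression loss are illustrative assumptions, not the papers' actual attack implementations.

```python
# Hypothetical sketch: untargeted PGD that suppresses object confidence scores.
# The detector is a stand-in; the real attacks target DETR/YOLO-style models.
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Stand-in detector: one objectness logit per cell of a feature grid."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, stride=2, padding=1)
        self.obj_head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        return self.obj_head(torch.relu(self.backbone(x)))

def pgd_confidence_attack(model, image, eps=8/255, alpha=2/255, steps=10):
    """Untargeted PGD: push all objectness logits toward 'no object'."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        obj_logits = model(x_adv)
        loss = torch.sigmoid(obj_logits).mean()  # mean object confidence
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # descend on confidence
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # L_inf projection
            x_adv = x_adv.clamp(0, 1)                         # valid image range
    return x_adv.detach()

model = ToyDetector().eval()
clean = torch.rand(1, 3, 64, 64)
adv = pgd_confidence_attack(model, clean)
print("max perturbation:", (adv - clean).abs().max().item())
```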
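
For the supervised contrastive defense, a margin-augmented SupCon loss is one plausible concrete form. The sketch below assumes the standard SupCon formulation (Khosla et al.) with an additive margin subtracted from positive-pair similarities; the margin value and the paper's exact loss shape may differ.

```python
# Minimal sketch of a margin-based supervised contrastive loss (assumed form).
import torch
import torch.nn.functional as F

def margin_supcon_loss(features, labels, temperature=0.1, margin=0.2):
    """features: (N, D) embeddings; labels: (N,) integer class ids."""
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature          # scaled cosine similarity
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Additive margin: positives must beat negatives by at least `margin`.
    sim = sim - pos_mask.float() * (margin / temperature)

    # Log-softmax over all other samples, averaged over each anchor's positives.
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(1) / pos_counts
    return loss[pos_mask.any(1)].mean()   # skip anchors with no positives

emb = torch.randn(8, 32, requires_grad=True)
y = torch.randint(0, 3, (8,))
print(margin_supcon_loss(emb, y).item())
```

Tightening positive pairs this way enlarges the margin between classes in embedding space, which is the intuition behind using contrastive objectives as an adversarial defense.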
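
For the standard-deviation-inspired (SDI) regularizer, one plausible reading is a term built from the standard deviation of the softmax output, which is large when a prediction is decisive. The sketch below adds such a term to cross-entropy on adversarial inputs; the sign, weighting, and exact statistic used in the paper may differ.

```python
# Minimal sketch of an SDI-style regularized training loss (assumed form).
import torch
import torch.nn.functional as F

def sdi_term(logits):
    """Std of class probabilities per sample; high for decisive predictions."""
    probs = F.softmax(logits, dim=1)
    return probs.std(dim=1).mean()

def sdi_regularized_loss(model, x_adv, y, lam=1.0):
    """Cross-entropy on adversarial inputs plus a reward for decisive outputs."""
    logits = model(x_adv)
    return F.cross_entropy(logits, y) - lam * sdi_term(logits)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)   # stand-in for adversarially perturbed inputs
y = torch.randint(0, 10, (4,))
print(sdi_regularized_loss(model, x, y).item())
```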

Sources

Evaluating the Adversarial Robustness of Detection Transformers

Distortion-Aware Adversarial Attacks on Bounding Boxes of Object Detectors

An Approximated Model of Wildfire Propagation on Slope

Enhancing Adversarial Robustness of Deep Neural Networks Through Supervised Contrastive Learning

Standard-Deviation-Inspired Regularization for Improving Adversarial Robustness

Adversarial Robustness for Deep Learning-based Wildfire Detection Models

Maintain Plasticity in Long-timescale Continual Test-time Adaptation

Attacks on the neural network and defense methods

Adversarial Attack and Defense for LoRa Device Identification and Authentication via Deep Learning

Adaptive Tabu Dropout for Regularization of Deep Neural Network

Hardness of Learning Fixed Parities with Neural Networks

SPARNet: Continual Test-Time Adaptation via Sample Partitioning Strategy and Anti-Forgetting Regularization

An Inclusive Theoretical Framework of Robust Supervised Contrastive Loss against Label Noise

Understanding Difficult-to-learn Examples in Contrastive Learning: A Theoretical Framework for Spectral Contrastive Learning

Best Transition Matrix Estimation or Best Label Noise Robustness Classifier? Two Possible Methods to Enhance the Performance of T-revision
