Advances in Computer Vision Robustness

Recent work in computer vision increasingly targets the robustness of deep learning models rather than accuracy alone. One line of research disentangles spatial and channel mixing operations, finding that random, fixed spatial mixing can match the performance of learned mixing while improving robustness to adversarial perturbations. Another line emphasizes evaluating and improving models beyond accuracy, exploring metrics such as fairness, calibration, and worst-class certified robustness. Noteworthy papers include:

- Beyond Accuracy, which introduces the QUBA score for evaluating model quality beyond accuracy.
- EasyRobust, which provides a comprehensive toolkit for training and evaluating robust vision models.
- Feature Statistics with Uncertainty, which proposes a robustness enhancement module that reconstructs attacked examples and calibrates shifted feature distributions.
- Principal Eigenvalue Regularization, which optimizes the largest eigenvalue of the smoothed confusion matrix to improve worst-class certified accuracy.
- k-NN as a Simple and Effective Estimator of Transferability, which finds that a simple k-nearest-neighbor evaluation surpasses existing transferability metrics.
- Stop Walking in Circles, which terminates Projected Gradient Descent early, substantially speeding up robustness evaluation.
- Efficient Verified Machine Unlearning For Distillation, which proposes a framework for verified machine unlearning in teacher-student knowledge distillation settings.
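The idea of replacing learned spatial mixing with a random, fixed operation can be sketched in a few lines. The snippet below is a minimal illustration, not the method from the paper: it applies a frozen random matrix across the token (spatial) dimension, leaving only channel-wise parameters to be learned. The function name `fixed_spatial_mixing` and the shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_spatial_mixing(tokens, mix_matrix):
    """Mix information across spatial positions with a fixed (untrained) matrix.

    tokens: (num_tokens, channels) array, e.g. flattened image patches.
    mix_matrix: (num_tokens, num_tokens) matrix sampled once at init and
    never updated; only channel-mixing weights would be trained.
    """
    return mix_matrix @ tokens

num_tokens, channels = 16, 8
# Scaled Gaussian init keeps the output variance comparable to the input.
mix = rng.standard_normal((num_tokens, num_tokens)) / np.sqrt(num_tokens)

x = rng.standard_normal((num_tokens, channels))
y = fixed_spatial_mixing(x, mix)
```

Because the spatial mixing is data-independent and frozen, an adversary cannot shape it during an attack, which is one intuition for the reported robustness gains.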
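The k-NN transferability idea can likewise be sketched concretely: score a pretrained model by running a leave-one-out k-nearest-neighbor classifier on its frozen features for the target dataset, with higher accuracy suggesting better transfer. This is a hedged sketch of the general recipe, assuming Euclidean distances and majority voting; the paper's exact protocol may differ.

```python
import numpy as np

def knn_transferability(features, labels, k=5):
    """Leave-one-out k-NN accuracy on frozen features as a transferability score."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    # Pairwise squared Euclidean distances between all feature vectors.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude each point from its own neighbours
    correct = 0
    for i in range(len(features)):
        nearest = np.argsort(d2[i])[:k]
        # Majority vote among the k nearest neighbours.
        pred = np.bincount(labels[nearest]).argmax()
        correct += pred == labels[i]
    return correct / len(features)

# Two well-separated synthetic clusters should yield a perfect score.
rng = np.random.default_rng(0)
cluster_a = rng.normal(0.0, 0.1, size=(20, 4))
cluster_b = rng.normal(5.0, 0.1, size=(20, 4))
X = np.vstack([cluster_a, cluster_b])
y = np.array([0] * 20 + [1] * 20)
score = knn_transferability(X, y, k=5)  # → 1.0 on these separable clusters
```

The appeal of this estimator is that it needs no fine-tuning or head training: a single feature-extraction pass plus nearest-neighbor lookup ranks candidate pretrained models.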

Sources

Rethinking the Role of Spatial Mixing

EasyRobust: A Comprehensive and Easy-to-use Toolkit for Robust and Generalized Vision

Beyond Accuracy: What Matters in Designing Well-Behaved Models?

Principal Eigenvalue Regularization for Improved Worst-Class Certified Robustness of Smoothed Classifiers

k-NN as a Simple and Effective Estimator of Transferability

Stop Walking in Circles! Bailing Out Early in Projected Gradient Descent

Feature Statistics with Uncertainty Help Adversarial Robustness

Efficient Verified Machine Unlearning For Distillation
