The field of computer vision is increasingly focused on the robustness of deep learning models. Recent studies disentangle spatial and channel mixing operations, finding that random, fixed spatial mixing can match the performance of learned mixing while offering greater robustness to adversarial perturbations. There is also growing emphasis on evaluating and improving model robustness beyond accuracy, with research exploring metrics such as fairness, calibration, and worst-class certified robustness.

Noteworthy papers include:

- Beyond Accuracy: introduces the QUBA score for evaluating model quality beyond accuracy.
- EasyRobust: provides a comprehensive toolkit for training and evaluating robust vision models.
- Feature Statistics with Uncertainty: proposes a robustness-enhancement module that reconstructs attacked examples and calibrates shifted distributions.
- Principal Eigenvalue Regularization: optimizes the largest eigenvalue of the smoothed confusion matrix to enhance worst-class accuracy.
- k-NN as a Simple and Effective Estimator of Transferability: finds that a simple k-nearest-neighbor evaluation surpasses existing transferability metrics.
- Stop Walking in Circles: introduces a method for early termination of Projected Gradient Descent, substantially speeding up robustness evaluation.
- Efficient Verified Machine Unlearning For Distillation: proposes a novel framework for verified machine unlearning in teacher-student knowledge distillation settings.
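The k-NN transferability idea can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes the common setup where a pretrained source model extracts feature vectors for a target dataset, and transferability is scored as leave-one-out k-nearest-neighbor accuracy over those features (the function name and toy data below are illustrative):

```python
from collections import Counter

def knn_transferability(features, labels, k=1):
    """Leave-one-out k-NN accuracy over extracted features.

    features: list of equal-length float lists (one per example)
    labels:   list of class labels
    Returns the fraction of examples whose held-out k-NN vote
    matches their true label -- a cheap proxy for how well the
    source model's features transfer to the target task.
    """
    n = len(features)
    correct = 0
    for i in range(n):
        # squared Euclidean distance from example i to every other example
        dists = []
        for j in range(n):
            if i == j:
                continue
            d = sum((a - b) ** 2 for a, b in zip(features[i], features[j]))
            dists.append((d, labels[j]))
        dists.sort(key=lambda t: t[0])
        # majority vote among the k nearest neighbors
        vote = Counter(lbl for _, lbl in dists[:k]).most_common(1)[0][0]
        correct += int(vote == labels[i])
    return correct / n

# Toy example: two well-separated clusters, so the score is perfect.
feats = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
         [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
labs = [0, 0, 0, 1, 1, 1]
print(knn_transferability(feats, labs, k=1))  # -> 1.0
```

The appeal of such an estimator is that it needs only a single feature-extraction pass and no fine-tuning, which is what makes it competitive as a cheap ranking signal.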
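The early-termination idea behind stopping PGD can be sketched on a toy scalar problem. This is a hedged illustration, not the paper's criterion: it runs projected signed-gradient ascent within an epsilon-ball and stops as soon as an iterate repeats, i.e. the attack has started walking in circles (all names and the cycle-detection rule below are assumptions for illustration):

```python
def pgd_early_stop(x0, grad, eps, step, max_iters=100):
    """Projected gradient ascent on a scalar input with cycle detection.

    x0:   clean input (float)
    grad: callable returning d(loss)/dx at a point
    eps:  perturbation budget (|x - x0| <= eps)
    step: step size
    Returns (adversarial x, iterations actually used).
    """
    x = x0
    seen = set()
    for t in range(max_iters):
        key = round(x, 9)
        if key in seen:          # iterate repeated: stop early
            return x, t
        seen.add(key)
        x = x + step * (1 if grad(x) > 0 else -1)   # signed-gradient step
        x = max(x0 - eps, min(x0 + eps, x))          # project onto the ball
    return x, max_iters

# Toy loss L(x) = x**2, so grad(x) = 2*x; from x0 = 1 the ascent climbs
# to the boundary x0 + eps and then cycles, triggering early termination
# long before the full iteration budget is spent.
x_adv, iters = pgd_early_stop(1.0, lambda x: 2 * x, eps=0.5, step=0.1, max_iters=100)
print(x_adv, iters)
```

The point of the sketch is the shape of the saving: once the iterate stabilizes on the boundary, the remaining budgeted iterations contribute nothing, so detecting that condition cuts evaluation cost without changing the attack's outcome.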