Advances in Adversarial Robustness and Data Security

The field of artificial intelligence and machine learning is placing greater emphasis on security and robustness, with a particular focus on defending against adversarial attacks and data poisoning. Researchers are developing new methods to improve the reliability and integrity of deep learning models, including novel defense strategies and analyses of emerging threats such as non-control-data attacks. There is also growing recognition that data quality matters: both data duplication and zero-inflated distributions can degrade model performance.

Noteworthy papers in this area include a survey on data poisoning in deep learning, which comprehensively reviews the field and identifies key open challenges, and a paper on alleviating performance disparity in adversarial spatiotemporal graph learning, which proposes a novel framework for enhancing minority-class gradients and representations. A study of the impact of data duplication on deep neural network-based image classifiers offers further insight into how data quality affects model generalization and performance.
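As a concrete illustration of the data-duplication concern, one common mitigation is to deduplicate a training set by hashing sample contents before training. The sketch below is a minimal, hypothetical example (the `find_duplicates` helper and the toy byte-string "images" are assumptions for illustration, not code from any of the cited papers); real pipelines would typically use perceptual hashing to also catch near-duplicates.

```python
import hashlib

def find_duplicates(samples):
    """Group sample indices by content hash to flag exact duplicates.

    `samples` is an iterable of bytes-like objects (e.g. raw image
    buffers); returns the index groups that share identical content.
    """
    groups = {}
    for idx, blob in enumerate(samples):
        digest = hashlib.sha256(bytes(blob)).hexdigest()
        groups.setdefault(digest, []).append(idx)
    return [idxs for idxs in groups.values() if len(idxs) > 1]

# Toy dataset: three distinct "images" plus one exact repeat of the first.
data = [b"img-A", b"img-B", b"img-C", b"img-A"]
print(find_duplicates(data))  # -> [[0, 3]]
```

Exact-hash deduplication is cheap (one pass, O(n) memory) but only removes byte-identical copies; whether near-duplicates should also be pruned depends on how strongly duplication is found to hurt generalization in the setting at hand.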

Sources

Data Poisoning in Deep Learning: A Survey

Non-control-Data Attacks and Defenses: A review

Alleviating Performance Disparity in Adversarial Spatiotemporal Graph Learning Under Zero-Inflated Distribution

Impact of Data Duplication on Deep Neural Network-Based Image Classifiers: Robust vs. Standard Models
