The field of machine learning is moving toward more robust and private methods for data analysis and model training. A significant direction is the integration of differential privacy (DP) into frameworks such as federated learning and graph learning, where researchers are exploring approaches to balance model utility against privacy protection. Recent advances in DP algorithms have yielded improved convergence rates and better utility-privacy trade-offs: adaptive clipping mechanisms and adversarial training methods, for example, are being proposed to improve the privacy-utility balance in differentially private federated learning and graph learning. These developments could enable the training of accurate models while protecting sensitive information.

Noteworthy papers include:

- Improved Rates of Differentially Private Nonconvex-Strongly-Concave Minimax Optimization, which proposes a method with lower gradient-noise variance and an improved upper bound for the DP estimator.
- Federated Learning with Differential Privacy: An Utility-Enhanced Approach, which modifies vanilla differentially private algorithms with a Haar wavelet transformation step and a novel noise injection scheme.
- AdvSGM: Differentially Private Graph Learning via Adversarial Skip-gram Model, which leverages adversarial training to privatize skip-gram while improving its utility.
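The clip-then-add-noise aggregation step that these DP training methods build on can be sketched as follows. This is a minimal illustrative sketch of the standard per-example gradient clipping and Gaussian noise mechanism; the function name, parameters, and NumPy implementation are our own assumptions, not code from any of the papers above.

```python
import numpy as np

def dp_clip_and_noise(per_example_grads, clip_norm, noise_multiplier, rng):
    """Sketch of one differentially private aggregation step.

    Each per-example gradient is rescaled so its L2 norm is at most
    `clip_norm` (bounding per-example sensitivity), the clipped
    gradients are averaged, and calibrated Gaussian noise is added.
    All names and parameter choices here are illustrative assumptions.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping norm.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)
    mean = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the sensitivity
    # (clip_norm) divided by the batch size, as in DP-SGD-style schemes.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# Example: two per-example gradients, one exceeding the clipping norm.
rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.0])]
noisy_mean = dp_clip_and_noise(grads, clip_norm=1.0,
                               noise_multiplier=1.0, rng=rng)
```

Adaptive-clipping variants replace the fixed `clip_norm` with a value estimated privately from the data during training, which is one of the mechanisms the summarized work explores for improving the privacy-utility trade-off.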