Recent work in federated learning (FL) has concentrated on three intertwined goals: privacy, efficiency, and robustness to adversarial behavior. Homomorphic encryption combined with quantization has been used to secure model updates during training, mitigating the privacy risks posed by inference attacks. Algorithms that pair low-bit quantization with pruning have cut computational cost substantially while preserving model accuracy. Combining adaptive optimizers with gradient compression has reduced communication overhead, with theoretical analyses yielding bounds that depend logarithmically, rather than linearly, on the number of model parameters, which is crucial for deep models. To address noisy labels, client-pruning methods identify and exclude clients whose data is likely mislabeled, improving overall model performance. Privacy-preserving inference services have also advanced, offering both data privacy and verifiable inference results through cryptographic techniques. Together, these developments mark a shift toward more secure, efficient, and reliable FL frameworks that meet the growing demands of decentralized, privacy-conscious machine learning applications.
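To make the low-bit quantization of model updates concrete, the sketch below shows an unbiased stochastic uniform quantizer in the style commonly used for FL gradient compression. It is a minimal illustration under assumed parameters (`num_levels`, the function names, and the dequantization convention are all hypothetical), not the algorithm of any specific paper mentioned above.

```python
import numpy as np

def quantize(v, num_levels=4, rng=None):
    """Stochastically quantize a vector to `num_levels` magnitude levels.

    Each entry is scaled into [0, num_levels] by the max magnitude, then
    rounded up or down at random with probability equal to the fractional
    part, so the quantizer is unbiased in expectation.
    """
    rng = rng or np.random.default_rng()
    norm = np.max(np.abs(v))
    if norm == 0:
        return np.zeros_like(v), 0.0
    scaled = np.abs(v) / norm * num_levels
    lower = np.floor(scaled)
    # Round up with probability (scaled - lower); otherwise round down.
    levels = lower + (rng.random(v.shape) < (scaled - lower))
    return np.sign(v) * levels, norm

def dequantize(levels, norm, num_levels=4):
    # Only the integer levels and one scalar norm need to be transmitted.
    return levels * norm / num_levels

grad = np.array([0.8, -0.3, 0.05, -0.9])
q, norm = quantize(grad, num_levels=4)
recovered = dequantize(q, norm)
```

With 4 levels, each coordinate can be sent in a few bits plus one float for the norm, instead of a full 32-bit float per coordinate; the stochastic rounding keeps the reconstruction unbiased so convergence analyses still go through.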
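The communication-overhead reduction from gradient compression can likewise be illustrated with top-k sparsification, one common compression scheme: only the k largest-magnitude gradient entries (with their indices) are transmitted. This is a generic sketch, not the specific optimizer-plus-compression method whose logarithmic bounds are cited above; the function name and interface are assumptions.

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient.

    The client sends (indices, values) instead of the dense vector,
    reducing upload size from O(d) floats to O(k) index/value pairs.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return idx, grad[idx], sparse

grad = np.array([0.05, -1.2, 0.3, 0.01, 0.9])
idx, vals, sparse = top_k_sparsify(grad, k=2)
# sparse keeps only -1.2 and 0.9; all other entries are zeroed.
```

In practice such schemes are paired with error feedback (accumulating the discarded entries locally for the next round) so the dropped mass is not lost permanently.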
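As one possible reading of client pruning for noisy labels, a server can flag clients whose held-out loss is an outlier relative to their peers and drop them from aggregation. The robust z-score rule below is an assumed stand-in for illustration only (the threshold, the MAD-based statistic, and the function name are all hypothetical), not the criterion used by the work summarized above.

```python
import numpy as np

def prune_clients(client_losses, z_thresh=2.0):
    """Return indices of clients to keep for aggregation.

    Computes a robust z-score from the median and the median absolute
    deviation (MAD) of per-client held-out losses; clients whose score
    exceeds `z_thresh` are treated as likely noisy-label clients.
    """
    losses = np.asarray(client_losses, dtype=float)
    med = np.median(losses)
    mad = np.median(np.abs(losses - med)) or 1e-12  # guard against MAD == 0
    z = (losses - med) / (1.4826 * mad)  # 1.4826 ~ consistency with std dev
    return [i for i, score in enumerate(z) if score <= z_thresh]

# Example: client 3 reports a much higher loss than its peers.
kept = prune_clients([0.42, 0.39, 0.45, 2.8, 0.41])
# kept == [0, 1, 2, 4]
```

Median/MAD statistics are preferred over mean/standard deviation here because a single badly noisy client would otherwise inflate the threshold used to detect it.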