Research in federated learning and machine unlearning continues to advance, with a focus on methods that protect data privacy while keeping model training efficient. Recent work has introduced frameworks such as PURGE for efficient verified machine unlearning, DELETE for general unlearning in class-centric tasks, and FedPaI for achieving extreme sparsity in federated learning. These approaches report gains in accuracy and communication efficiency together with reduced computational overhead. Complementary directions include edge model overlays, selective pruning, and knowledge deletion techniques, all aimed at further improving the performance and privacy of federated models. Taken together, these advances point toward federated learning that is both more efficient and more private.
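To make the idea of sparse, communication-efficient federated updates concrete, the sketch below shows a single federated-averaging round in which each client uploads only the top-k entries of its update by magnitude. This is a minimal illustration under simplifying assumptions (plain NumPy, a generic parameter vector, top-k magnitude sparsification), not an implementation of FedPaI or any other framework named above; all function names and parameters are illustrative.

```python
# Minimal sketch, assuming top-k magnitude sparsification as a stand-in for
# the sparsity techniques discussed above (not the FedPaI algorithm itself).
import numpy as np

def sparsify_topk(update: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries of a client update."""
    sparse = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]        # indices of the top-k entries
    sparse[idx] = update[idx]
    return sparse

def federated_round(global_weights: np.ndarray,
                    client_grads: list[np.ndarray],
                    k: int,
                    lr: float = 0.1) -> np.ndarray:
    """One FedAvg-style round where each client uploads a sparse update."""
    sparse_updates = [sparsify_topk(-lr * g, k) for g in client_grads]
    aggregate = np.mean(sparse_updates, axis=0)  # server averages the sparse updates
    return global_weights + aggregate

# Toy usage: 3 clients, a 10-parameter model, 90% sparsity per client (k = 1 of 10).
rng = np.random.default_rng(0)
weights = rng.normal(size=10)
grads = [rng.normal(size=10) for _ in range(3)]
new_weights = federated_round(weights, grads, k=1)
print(np.count_nonzero(new_weights - weights), "coordinates changed")
```

In this toy setting each client transmits at most k nonzero coordinates instead of the full update, which is the basic mechanism by which sparsity reduces per-round communication; real systems layer error feedback, structured pruning, or overlay schemes on top of this idea.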