The field of Graphical User Interface (GUI) agents and automation is advancing rapidly, with a focus on building systems that are more capable, adaptable, and practical to deploy. Recent work integrates large language models (LLMs) and multimodal learning so that GUI agents can better understand and interact with complex interfaces. Notable directions include visual world models such as ViMo, which generate future GUI observations as images, and frameworks such as LearnAct, which improve mobile GUI agents by learning from human demonstrations. Researchers are also exploring process rewards and intent-based affordances to make GUI automation more efficient and effective. Overall, the field is moving toward more sophisticated, autonomous GUI agents that can interact seamlessly with users and carry out complex tasks.

Noteworthy papers include LearnAct, which proposes a unified demonstration benchmark for mobile GUI agents; InfiGUI-R1, which introduces a reasoning-centric framework for advancing GUI agents from reactive actors to deliberative reasoners; and UFO2, which presents a multiagent AgentOS for Windows desktops that enables practical, system-level automation.
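
To make the visual world model idea more concrete, the minimal sketch below shows one way a GUI agent could use predicted future observations to rank candidate actions: a world model predicts the next screenshot for each action, a scorer estimates progress toward the user's goal, and the agent executes the highest-scoring action. The class names (`VisualWorldModel`, `GoalScorer`, `Action`) and the stub implementations are assumptions for illustration only and do not reflect the actual ViMo interface.

```python
"""Minimal sketch of model-based GUI action selection with a visual world model.

All class names and the dummy implementations are hypothetical; they illustrate
the general idea of predicting future GUI observations (as images) to rank
candidate actions, not any specific paper's API.
"""
from dataclasses import dataclass
from typing import List
import random


@dataclass
class Action:
    """A candidate GUI action, e.g. a tap at (x, y) or typing text."""
    description: str


class VisualWorldModel:
    """Hypothetical world model: predicts the next GUI screenshot for an action."""

    def predict(self, screenshot: bytes, action: Action) -> bytes:
        # A real model would generate a predicted image; here we return a stub.
        return screenshot + action.description.encode()


class GoalScorer:
    """Hypothetical scorer: estimates how well a predicted screen matches the goal."""

    def score(self, predicted_screenshot: bytes, goal: str) -> float:
        # Placeholder heuristic; a real scorer might query a multimodal LLM.
        rng = random.Random(hash((predicted_screenshot, goal)))
        return rng.random()


def select_action(screenshot: bytes, goal: str,
                  candidates: List[Action],
                  world_model: VisualWorldModel,
                  scorer: GoalScorer) -> Action:
    """Rank candidate actions by the scored value of their predicted outcomes."""
    scored = [
        (scorer.score(world_model.predict(screenshot, action), goal), action)
        for action in candidates
    ]
    return max(scored, key=lambda pair: pair[0])[1]


if __name__ == "__main__":
    candidates = [Action("tap 'Settings'"), Action("scroll down"), Action("tap 'Search'")]
    best = select_action(b"<screenshot>", "open the Wi-Fi settings",
                         candidates, VisualWorldModel(), GoalScorer())
    print("Chosen action:", best.description)
```

The design choice illustrated here is planning by simulation: rather than reacting to the current screen alone, the agent evaluates imagined futures before acting, which is the role a visual world model plays in this line of work.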