Recent research in multi-agent systems (MAS) has shifted markedly toward enhancing adaptability, collaboration, and safety in dynamic environments. One notable trend is the development of automated responsibility assignment for safety violations, which combines counterfactual reasoning with the Shapley value to quantify each agent's contribution to a violation, improving accountability and the explainability of decisions (a computational sketch follows below). Another line of work integrates legibility into multi-agent reinforcement learning (MARL), enabling agents to reveal their intentions to teammates and optimize collaborative behavior, which has been reported to reduce training time. MARL with communication protocols has also been advanced to solve complex cooperative tasks such as multi-agent navigation and collision avoidance, demonstrating improved coordination and resilience to external noise. Furthermore, inverse attention mechanisms, inspired by Theory of Mind, enable agents to adapt to diverse and changing environments by inferring and responding to the attention states of other agents, improving performance in both cooperative and competitive scenarios. Lastly, predicting the future actions of reinforcement learning agents has become important for real-world deployment, with methods that leverage agents' inner states and forward simulation proving effective for safer interaction (also sketched below). Overall, these developments underscore a move toward more intelligent, adaptable, and safe multi-agent systems.
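To make the Shapley-based responsibility assignment concrete, here is a minimal Python sketch. It computes exact Shapley values over a small set of agents; the counterfactual severity function `harm(S)` (which would replay an episode with only coalition S acting as recorded and score the resulting violation) and the toy severity table are hypothetical stand-ins, not the surveyed papers' actual evaluators.

```python
from itertools import combinations
from math import factorial
from typing import Callable, Sequence

def shapley_responsibility(
    agents: Sequence[str],
    harm: Callable[[frozenset], float],
) -> dict[str, float]:
    """Exact Shapley attribution of a safety-violation severity score.

    Agent i's share averages its marginal contribution
    harm(S ∪ {i}) - harm(S) over all coalitions S not containing i,
    weighted by |S|! * (n - |S| - 1)! / n!.
    """
    n = len(agents)
    values = {}
    for i in agents:
        others = [a for a in agents if a != i]
        phi = 0.0
        for k in range(n):
            for combo in combinations(others, k):
                s = frozenset(combo)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (harm(s | {i}) - harm(s))
        values[i] = phi
    return values

# Toy counterfactual table: the collision is severe only when both agents act.
severity = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.0,
    frozenset({"b"}): 0.2,
    frozenset({"a", "b"}): 1.0,
}
print(shapley_responsibility(["a", "b"], lambda s: severity[s]))
# {'a': 0.4, 'b': 0.6} — shares sum to the full-coalition severity (efficiency).
```

Exact computation enumerates all 2^(n-1) coalitions per agent, so for larger teams the surveyed methods would need sampled or approximate Shapley estimates.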
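As one concrete reading of the simulation-based action prediction mentioned above, the sketch below rolls a copy of the target agent's policy forward in a cloned simulator. The Gym-style `step(action) -> (obs, reward, done, info)` signature and the assumption of an accessible, deterministic policy are ours, not prescribed by the surveyed work.

```python
import copy

def predict_future_actions(env, policy, obs, horizon=5):
    """Predict an agent's next `horizon` actions by simulating forward.

    Assumes `env` is deep-copyable and `policy` is a deterministic
    mapping obs -> action (a proxy for the agent's inner state).
    """
    sim = copy.deepcopy(env)  # counterfactual copy; the real env is untouched
    predicted = []
    for _ in range(horizon):
        action = policy(obs)
        predicted.append(action)
        obs, _reward, done, _info = sim.step(action)
        if done:
            break
    return predicted
```

An observer holding a copy of another agent's policy could call this before committing its own action, for example to check whether the predicted trajectory leads to an imminent collision.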