The field of reinforcement learning is moving toward a stronger emphasis on safety, with researchers exploring approaches that let agents operate within predetermined constraints without sacrificing performance. This shift is driven by the need to deploy reinforcement learning in real-world settings where safety violations can have severe consequences. Recent work has focused on algorithms and frameworks that handle uncertain environments, multiple constraints, and complex tasks. Notable advances include cost-modulated rewards, stochastic constraint thresholds, and certified training methods that provide safety guarantees during both policy training and deployment.

Noteworthy papers include:

- "Safety Modulation: Enhancing Safety in Reinforcement Learning through Cost-Modulated Rewards", which proposes a safe RL approach called Safety Modulated Policy Optimization.
- "SPoRt -- Safe Policy Ratio: Certified Training and Deployment of Task Policies in Model-Free RL", which presents a data-driven approach to obtaining safety guarantees for a new task-specific policy in a model-free setting.
- "Ensuring Safety in an Uncertain Environment: Constrained MDPs via Stochastic Thresholds", which introduces a model-based primal-dual algorithm that handles multiple constraints against stochastic thresholds.

Together, these papers represent significant progress in safety-centric reinforcement learning, advancing the field toward more reliable and trustworthy applications.
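Several of the themes above (cost-modulated objectives, constraint thresholds, primal-dual updates) share a common Lagrangian structure: the policy is improved against a reward penalized by scaled costs, while a dual variable is pushed toward a cost threshold. The snippet below is a minimal sketch of that generic recipe on a toy constrained bandit; it is not the specific algorithm of any paper listed here, and the reward/cost values, threshold `d`, and learning rates are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a generic Lagrangian primal-dual update for a constrained
# decision problem, shown on a toy 2-action bandit. All numbers (rewards,
# costs, threshold d, learning rates) are hypothetical illustration values.

rng = np.random.default_rng(0)

# Action 0: high reward but high expected cost; action 1: safer but lower reward.
reward_means = np.array([1.0, 0.6])
cost_means = np.array([0.8, 0.1])
d = 0.3                       # constraint: expected cost per step must stay <= d

theta = np.zeros(2)           # softmax policy parameters (primal variable)
lam = 0.0                     # Lagrange multiplier (dual variable)
alpha_theta, alpha_lam = 0.05, 0.02

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(5000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)

    # Sample a noisy reward and cost for the chosen action.
    r = reward_means[a] + 0.1 * rng.standard_normal()
    c = cost_means[a] + 0.1 * rng.standard_normal()

    # Primal step: policy-gradient ascent on the Lagrangian objective r - lam * c
    # (the cost effectively modulates the reward signal).
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0
    theta += alpha_theta * (r - lam * c) * grad_log_pi

    # Dual step: projected gradient ascent on the constraint violation c - d.
    lam = max(0.0, lam + alpha_lam * (c - d))

pi = softmax(theta)
print(f"final policy: {pi}, lambda: {lam:.2f}, "
      f"expected cost: {pi @ cost_means:.2f} (threshold {d})")
```

In this toy problem the constrained optimum mixes the two actions so that the expected cost sits at the threshold; the dual variable `lam` rises whenever the constraint is violated and relaxes once it is satisfied. The papers above go well beyond this baseline, for example by certifying the trained policy or by treating the threshold itself as stochastic.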