The research area is witnessing a significant shift toward enhancing interpretability and safety in decision-making models, particularly in motion planning and reinforcement learning (RL). A notable trend is the integration of constraint learning into existing frameworks, which improves model performance while providing clearer justifications for decisions. This is exemplified by methods that leverage vectorized scene embeddings to extract driving constraints from expert trajectories, improving both interpretability and generalization across diverse scenarios.

In RL, there is a strong focus on safe offline learning, where a policy must maximize cumulative reward while adhering to safety constraints, using only a fixed offline dataset. Recent approaches balance reward and safety through latent safety models and feasibility-informed optimization, incorporating cost-advantage terms into the policy update and leveraging textual constraints as a more flexible and accessible way to specify safety requirements. These methods aim to ensure persistent safety, even in out-of-distribution states. Overall, the field is progressing toward more transparent, safe, and adaptable models that can operate in complex, real-world scenarios.
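For reference, the safe offline RL setting described above is commonly formalized as a constrained Markov decision process. The sketch below uses the standard textbook formulation rather than the objective of any particular method cited here; the reward $r$, cost $c$, discount $\gamma$, and cost budget $\kappa$ are the usual symbols, not quantities defined in the surveyed works:

$$
\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\right] \le \kappa,
$$

with the additional offline restriction that $\pi$ must be learned solely from a fixed dataset $\mathcal{D}$ of previously collected transitions, without further environment interaction. Feasibility-informed and cost-advantage approaches can be read as different ways of enforcing the cost constraint during policy improvement under this restriction.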