Research on neural networks and reinforcement learning is moving toward a deeper understanding of the dynamics and mechanisms that underlie learning. One active thread sits at the intersection of Bayesian statistics and neural networks, with particular emphasis on stochastic gradient descent (SGD) and its relationship to Bayesian sampling. Another is the development of more efficient and robust reinforcement learning algorithms, including methods that cope with real-time constraints and uncertainty. Several recent papers illustrate these directions. Almost Bayesian argues that stochastic gradient descent can be viewed as a modified Bayesian sampler. Harnessing uncertainty when learning through Equilibrium Propagation demonstrates that Equilibrium Propagation can learn in the presence of uncertainty while improving model convergence and performance. Noise-based reward-modulated learning introduces a noise-based learning rule that enables efficient, gradient-free learning in reinforcement learning.
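The link between noisy gradient steps and Bayesian sampling can be made concrete with stochastic gradient Langevin dynamics (SGLD), a closely related and well-known construction; the sketch below is only illustrative and is not taken from the Almost Bayesian paper. The toy model (Gaussian data with unknown mean, a wide Gaussian prior) and all hyperparameters are assumptions for the demo.

```python
import numpy as np

# Illustrative sketch: SGLD turns minibatch gradient steps into approximate
# posterior samples by injecting Gaussian noise matched to the step size.
# Assumed toy model: data ~ N(theta, 1), prior theta ~ N(0, 10^2).
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=1000)

N, batch_size, eps = len(data), 32, 1e-4
theta, samples = 0.0, []
for t in range(5000):
    batch = rng.choice(data, batch_size, replace=False)
    # Unbiased minibatch estimate of the gradient of the log posterior
    grad = -theta / 100.0 + (N / batch_size) * np.sum(batch - theta)
    # Langevin update: half a gradient step plus matched Gaussian noise
    theta += 0.5 * eps * grad + np.sqrt(eps) * rng.normal()
    if t >= 1000:  # discard burn-in
        samples.append(theta)

# The empirical mean of the retained samples approximates the posterior
# mean, which for this weak prior is close to the sample mean of the data.
```

With the injected noise term deleted, the same loop is plain minibatch SGD; the "modified Bayesian sampler" perspective asks how far SGD's stationary distribution drifts from the true posterior that exact Langevin sampling would target.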
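The idea of noise-based, reward-modulated, gradient-free learning can be sketched with a generic weight-perturbation rule (in the spirit of REINFORCE and evolution strategies); this is not the specific rule from the Noise-based reward-modulated learning paper, and the regression task, perturbation scale, and learning rate below are assumptions for the demo.

```python
import numpy as np

# Illustrative weight-perturbation sketch: no gradients are computed.
# Weights move along the injected noise, scaled by how much the perturbed
# reward beats the unperturbed reward (a per-step baseline).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def reward(w):
    return -np.mean((X @ w - y) ** 2)  # higher is better

w = np.zeros(3)
sigma, lr = 0.1, 0.02
for t in range(5000):
    noise = rng.normal(size=3)            # perturb the weights with noise
    r0 = reward(w)                        # baseline: unperturbed reward
    r = reward(w + sigma * noise)         # reward under the perturbation
    w += lr * (r - r0) * noise / sigma    # reward-modulated update
```

In expectation the update follows the reward gradient, yet the rule only ever evaluates scalar rewards, which is what makes such noise-based schemes attractive when backpropagated gradients are unavailable or expensive.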