The field of machine learning is placing greater emphasis on calibration and fairness in prediction, and researchers are exploring new methods to improve the accuracy and reliability of predictions, particularly in high-stakes applications.

One key area of focus is the development of more flexible and expressive calibration techniques, such as unconstrained monotonic neural networks, which can learn arbitrary monotonic functions and thereby make calibration effective in complex scenarios; a minimal sketch of this idea appears at the end of this summary.

Another important direction is the study of multiaccuracy and multicalibration, multigroup fairness notions for prediction that have found numerous applications in learning and computational complexity; informal statements of these notions also appear below. Adding global calibration to multiaccuracy has been shown to substantially boost its power, enabling the recovery of implications that were previously known only under the stronger notion of multicalibration. Researchers are also investigating connections between problems in computational learning theory and property testing, such as agnostically learning conjunctions and tolerantly testing juntas, and are developing improved algorithms for these problems.

Notable papers in this area include the proposal of an Unconstrained Monotonic Neural Network for calibration, which significantly relaxes the constraints placed on the calibrator and improves its flexibility and expressiveness. In addition, the study of calibrated multiaccuracy has shed new light on the complementary roles that multiaccuracy and calibration play in achieving stronger notions of fairness and accuracy.
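
To illustrate the monotonic-calibrator idea, here is a minimal sketch, not the paper's implementation: the calibration map is parameterized as the integral of a strictly positive network output, so it is monotone by construction while the underlying network remains unconstrained. The class name, layer sizes, midpoint-rule quadrature, and final sigmoid squashing are all illustrative assumptions.

```python
# Minimal sketch of an unconstrained monotonic neural calibrator.
# Idea: F(x) = offset + \int_0^x softplus(g(t)) dt, where g is an ordinary
# (unconstrained) network; because the integrand is strictly positive,
# F is strictly increasing in x, which is exactly what a calibration map needs.

import torch
import torch.nn as nn


class MonotonicCalibrator(nn.Module):
    def __init__(self, hidden: int = 32, n_quad: int = 50):
        super().__init__()
        # Unconstrained integrand network; positivity is enforced via softplus below.
        self.integrand = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.offset = nn.Parameter(torch.zeros(1))  # value of the map at x = 0
        self.n_quad = n_quad                         # quadrature points per input

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # Approximate the integral with the midpoint rule, one integral per logit.
        x = logits.view(-1, 1)                                   # (B, 1)
        ts = (torch.arange(self.n_quad, device=x.device) + 0.5) / self.n_quad
        pts = x * ts.view(1, -1)                                 # (B, Q) points in [0, x]
        g = nn.functional.softplus(self.integrand(pts.reshape(-1, 1)))
        g = g.view(x.shape[0], -1)                               # (B, Q) positive integrand
        integral = (x / self.n_quad) * g.sum(dim=1, keepdim=True)
        # Squash to (0, 1); composing with a sigmoid preserves monotonicity.
        return torch.sigmoid(self.offset + integral)


# Usage sketch: fit the calibrator on held-out (logit, label) pairs with NLL.
if __name__ == "__main__":
    cal = MonotonicCalibrator()
    opt = torch.optim.Adam(cal.parameters(), lr=1e-2)
    logits = torch.randn(256)                          # placeholder uncalibrated scores
    labels = torch.randint(0, 2, (256,)).float()       # placeholder binary outcomes
    for _ in range(100):
        opt.zero_grad()
        p = cal(logits).squeeze(-1)
        loss = nn.functional.binary_cross_entropy(p, labels)
        loss.backward()
        opt.step()
```

Because the map is monotone by construction, fitting it on a held-out set rescales the model's scores into calibrated probabilities without changing their ranking.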
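
For reference, one common informal formulation of the fairness notions discussed above is the following; exact definitions vary across papers in weighting and tolerance. Here p : X → [0,1] is the predictor, y ∈ {0,1} the outcome, C the class of audit functions (e.g., subgroup indicators), and α the tolerance.

```latex
% Multiaccuracy: no auditor in C detects correlated prediction error.
\forall c \in \mathcal{C}:\quad
  \bigl|\, \mathbb{E}\bigl[\, c(x)\,(y - p(x)) \,\bigr] \,\bigr| \le \alpha .

% Global calibration: the predictor is (approximately) unbiased on each of its own level sets.
\forall v \in \operatorname{range}(p):\quad
  \bigl|\, \mathbb{E}\bigl[\, y - p(x) \mid p(x) = v \,\bigr] \,\bigr| \le \alpha .

% Multicalibration: the auditing condition is imposed on every level set,
% which is stronger than requiring multiaccuracy and calibration separately.
\forall c \in \mathcal{C},\ \forall v \in \operatorname{range}(p):\quad
  \bigl|\, \mathbb{E}\bigl[\, c(x)\,(y - p(x)) \mid p(x) = v \,\bigr] \,\bigr| \le \alpha .
```

In this formulation, calibrated multiaccuracy corresponds to imposing the first two conditions together; as noted above, this is weaker than multicalibration yet recovers several implications previously known only under the stronger notion.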