Causal inference in machine learning is advancing rapidly, particularly on the challenges of unobserved confounding, generalizability, and aleatoric uncertainty. One line of work develops models that flexibly learn the data-generating process while directly inferring causal quantities such as the marginal causal effect; beyond improving the accuracy of causal estimates, these models can generate synthetic datasets that mimic real-world complexity, supporting more rigorous validation and inference. A second direction uses large language models to impute unobserved variables, which is crucial for estimating causal effects from observational data. Systematic frameworks for evaluating generalizability under covariate shift are also emerging, giving a more realistic picture of model performance than synthetic benchmarks alone. Finally, quantifying the aleatoric uncertainty of treatment effects is gaining attention, with new learners designed to capture the randomness inherent in causal quantities. Together, these developments make causal inference more applicable and reliable in real-world settings.
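As a concrete reference for the terminology above, the marginal causal effect is usually understood as the average treatment effect over the covariate distribution. Under the standard ignorability and overlap assumptions it can be written as

$$\tau \;=\; \mathbb{E}\big[Y(1) - Y(0)\big] \;=\; \mathbb{E}_{X}\Big[\, \mathbb{E}[Y \mid X, T=1] \;-\; \mathbb{E}[Y \mid X, T=0] \,\Big],$$

where $Y(1)$ and $Y(0)$ are potential outcomes, $T$ is the treatment indicator, and $X$ are the observed covariates.

The sketch below is purely illustrative and not drawn from any of the methods summarized here; the simulator, the T-learner outcome models, and all names are assumptions made for exposition. It forms a plug-in estimate of this marginal effect and then re-evaluates it on a shifted covariate distribution, mirroring the kind of covariate-shift evaluation of generalizability described above.

```python
# Illustrative sketch: plug-in estimate of the marginal causal effect (ATE)
# from simulated observational data, evaluated in-distribution and under a
# covariate shift. The simulator and model choices are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def simulate(n, x_mean):
    """Simulate confounded data; x_mean controls the covariate distribution."""
    x = rng.normal(loc=x_mean, size=(n, 1))
    propensity = 1 / (1 + np.exp(-x[:, 0]))                  # treatment depends on x
    t = rng.binomial(1, propensity)
    y = 2.0 * t + x[:, 0] + rng.normal(scale=0.5, size=n)    # true effect is 2
    return x, t, y

# Fit separate outcome models for treated and control units (a T-learner).
x, t, y = simulate(5000, x_mean=0.0)
m1 = GradientBoostingRegressor().fit(x[t == 1], y[t == 1])
m0 = GradientBoostingRegressor().fit(x[t == 0], y[t == 0])

def plug_in_ate(x_eval):
    """Average the predicted individual effects over a covariate sample."""
    return float(np.mean(m1.predict(x_eval) - m0.predict(x_eval)))

x_iid, _, _ = simulate(5000, x_mean=0.0)     # same covariate distribution
x_shift, _, _ = simulate(5000, x_mean=1.5)   # shifted covariate distribution
print("ATE estimate, in-distribution:   ", plug_in_ate(x_iid))
print("ATE estimate, under covariate shift:", plug_in_ate(x_shift))
```

Because the simulated effect is constant, both estimates should be close to 2; a large gap under the shifted distribution would signal that the outcome models extrapolate poorly outside the training support, which is exactly what covariate-shift evaluation frameworks aim to expose.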