Recent advances in data synthesis and table understanding are reshaping how structured data is generated, shared, and interpreted. A notable trend is the integration of large language models (LLMs) with tabular data, which enables more flexible and dynamic data processing. This integration improves the fidelity and adaptability of synthetic data and supports more robust, privacy-aware data sharing: multi-table synthesizers and new evaluation metrics allow collaborating parties to exchange synthetic relational data in place of sensitive source tables.

LLM-enabled in-context databases, in which the tables live directly in the model's prompt rather than in a storage engine, are also emerging as a lightweight alternative to traditional databases for workloads involving small tables and frequent updates. In table representation learning, synthetic data generation is being leveraged to improve table management and recommendation systems, while contrastive learning techniques are strengthening the comprehension and analysis of tabular data. Together, these developments are pushing the boundaries of what is practical with structured data and fostering more efficient, data-driven practices.
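
To make the in-context database idea concrete, the sketch below serializes a small table into an LLM prompt, asks the model to apply an update, and then answers a question against the updated state. The `complete` function is a hypothetical placeholder for whatever LLM completion API is available, and the serialization format and prompt wording are illustrative assumptions rather than a specific system from the work surveyed here.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion API; swap in a real client."""
    raise NotImplementedError("plug in an LLM client here")


def serialize(rows: list[dict]) -> str:
    """Render a list of records as a Markdown table the model can read."""
    headers = list(rows[0].keys())
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    lines += ["| " + " | ".join(str(r[h]) for h in headers) + " |" for r in rows]
    return "\n".join(lines)


def query_in_context_db(rows: list[dict], instruction: str, question: str) -> str:
    """Keep the 'database' entirely in the prompt: apply an update, then query it."""
    prompt = (
        "You maintain the following table:\n\n"
        f"{serialize(rows)}\n\n"
        f"First apply this update: {instruction}\n"
        f"Then answer: {question}\n"
        "Reply with the answer only."
    )
    return complete(prompt)  # hypothetical LLM call


# Illustrative usage (names and values are made up):
# rows = [{"id": 1, "name": "Ada", "dept": "ML"}, {"id": 2, "name": "Bob", "dept": "DB"}]
# query_in_context_db(rows, "Move Bob to the ML department.", "How many people are in ML?")
```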
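
The contrastive approach to table understanding can likewise be illustrated with a minimal, self-contained sketch: two "views" of each row are produced by randomly corrupting cells with values from other rows, a small encoder embeds both views, and an InfoNCE loss pulls the two views of the same row together while pushing apart views of different rows. The encoder architecture, corruption scheme, and hyperparameters here are assumptions chosen for brevity, not the details of any particular method discussed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RowEncoder(nn.Module):
    """Tiny MLP that maps a numeric row of a table to a unit-norm embedding."""
    def __init__(self, num_features: int, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_features, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


def corrupt(rows: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Second 'view' of each row: replace a fraction of cells with values
    taken from other rows of the same column."""
    mask = torch.rand_like(rows) < p
    shuffled = rows[torch.randperm(rows.size(0))]
    return torch.where(mask, shuffled, rows)


def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: the two views of a row are positives; other rows in the
    batch serve as negatives (positives lie on the diagonal of the logits)."""
    logits = z1 @ z2.t() / tau
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)


# Toy training step on a random numeric table: 256 rows, 10 columns.
rows = torch.randn(256, 10)
encoder = RowEncoder(num_features=10)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = info_nce(encoder(rows), encoder(corrupt(rows)))
loss.backward()
optimizer.step()
```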