The field of Text-to-SQL is moving toward more efficient and effective systems, with a focus on reducing computational cost while improving performance. Researchers are exploring approaches such as complexity-aware routing, reinforcement learning with tailored partial rewards, and distillation-based customization. These methods aim to assign queries to suitable SQL generation pipelines, develop intrinsic reasoning skills, and produce high-quality synthetic data for fine-tuning smaller models. As a result, the field is seeing notable gains in accuracy, generalization, and cost-efficiency. Notable papers in this area include:

- EllieSQL proposes a complexity-aware routing framework that reduces token use by over 40% without compromising performance.
- Reasoning-SQL introduces a novel set of partial rewards tailored for Text-to-SQL, achieving higher accuracy and better generalization than supervised fine-tuning.
- Distill-C uses large teacher LLMs to produce high-quality synthetic data, enabling smaller models to rival or outperform larger ones.
- MageSQL explores in-context learning over LLMs, introducing a graph-based demonstration selection method and an error correction module.
- LearNAT improves the performance of open-source LLMs on complex NL2SQL tasks through task decomposition and reinforcement learning.
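The routing idea behind systems like EllieSQL can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual design: the keyword heuristic, pipeline functions, and threshold are all stand-ins for a learned complexity classifier and real SQL generators.

```python
# Illustrative sketch of complexity-aware routing for Text-to-SQL.
# All names and the heuristic below are hypothetical, not from EllieSQL.

def estimate_complexity(question: str) -> int:
    """Crude proxy: count keywords that often imply joins, grouping, or nesting."""
    signals = ["average", "per", "each", "most", "least", "between"]
    q = question.lower()
    return sum(q.count(s) for s in signals)

def cheap_pipeline(question: str) -> str:
    # Placeholder for a small, low-cost SQL generator.
    return f"-- small-model SQL for: {question}"

def expensive_pipeline(question: str) -> str:
    # Placeholder for a large, high-accuracy SQL generator.
    return f"-- large-model SQL for: {question}"

def route(question: str, threshold: int = 2) -> str:
    """Send simple questions to the cheap generator, complex ones to the strong one."""
    if estimate_complexity(question) < threshold:
        return cheap_pipeline(question)
    return expensive_pipeline(question)
```

In practice the classifier would be trained rather than keyword-based, but the cost saving comes from the same mechanism: only questions judged complex pay for the expensive pipeline.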