Recent developments in language model post-training and fine-tuning show a marked shift toward more efficient and transparent methodology. Researchers are increasingly building open-source guides and tools for post-training, aiming to democratize techniques previously locked inside proprietary systems. This trend is exemplified by fully open, state-of-the-art post-trained models that are competitive with, and in some cases outperform, closed-source counterparts, while also publishing complete training recipes and evaluation schemes.

At the same time, there is growing emphasis on parameter-efficient fine-tuning (PEFT) methods, which reduce computational cost while maintaining, and sometimes improving, model performance. Methods such as Low-Rank Adaptation (LoRA), which freezes the pretrained weights and trains only a small low-rank update, are being studied systematically to understand their impact on model behavior, including task generalization and memorization; a minimal sketch of the mechanism appears below.

Security concerns are also being addressed: because PEFT adapters are small and widely shared on open-source platforms, frameworks are being developed to detect backdoor attacks embedded in them and so protect the integrity of shared models (a simple behavioral screen is sketched after the LoRA example). Taken together, the field is moving toward more efficient, transparent, and secure practices for adapting and fine-tuning large language models.
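To make the LoRA mechanism concrete, here is a minimal PyTorch sketch, not any particular library's implementation: the pretrained weight matrix is frozen, and only a low-rank correction B·A (scaled by alpha/r) is trained. The class name, hyperparameter values, and initialization scale below are illustrative choices, not prescribed by the text.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update.

    The effective weight is W + (alpha / r) * B @ A, where A and B are
    the only trainable parameters. B starts at zero, so training begins
    from the base model's exact behavior.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base projection plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a projection layer; only the two rank-r factors train.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 2 * 8 * 768 = 12288, vs 768*768 frozen
```

The parameter count illustrates why PEFT is cheap: the trainable update is two r-by-d factors rather than a full d-by-d matrix, so memory and optimizer state shrink by orders of magnitude.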
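On the security point, published detection frameworks typically analyze adapter weights or internal activations directly; the sketch below is only a hypothetical behavioral screen that illustrates the threat model under simplifying assumptions. The function name, the toy classifier, and the candidate trigger are all invented for illustration: a backdoored adapter tends to flip predictions whenever a specific, input-agnostic trigger pattern is present.

```python
import torch
import torch.nn as nn

def trigger_flip_rate(model: nn.Module,
                      inputs: torch.Tensor,
                      trigger: torch.Tensor) -> float:
    """Fraction of inputs whose predicted class changes when a candidate
    trigger pattern is added. A rate near 1.0 across otherwise diverse
    inputs, for one specific pattern but not for random noise, is a
    classic symptom of a backdoor."""
    model.eval()
    with torch.no_grad():
        clean = model(inputs).argmax(dim=-1)
        triggered = model(inputs + trigger).argmax(dim=-1)
    return (clean != triggered).float().mean().item()

# Hypothetical usage on an untrained toy classifier (the number here is
# meaningless; it only demonstrates the interface of such a screen).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
inputs = torch.randn(256, 16)
candidate = torch.zeros(16)
candidate[0] = 3.0  # a fixed, input-agnostic perturbation to test
print(f"flip rate: {trigger_flip_rate(model, inputs, candidate):.2f}")
```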