The field of large language models is moving toward smaller, more efficient models, with a focus on improving performance while reducing resource requirements. Recent work shows that importance-aware delta sparsification, saliency-aware partial retraining, and rate-constrained optimized training can substantially improve model compression and quantization. Related research demonstrates that binary and ternary quantization can improve feature discrimination, and that larger models tend to generalize better because their loss variance and quantization error decrease with scale. Noteworthy papers include ImPart, which achieves state-of-the-art delta sparsification performance, and Backslash, which enables flexible trade-offs between model accuracy and complexity. Also notable are Enhancing Ultra-Low-Bit Quantization of Large Language Models and Precision Neural Network Quantization via Learnable Adaptive Modules, which propose new approaches to ultra-low-bit quantization and general neural network quantization, respectively.
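
To make the delta-sparsification idea concrete, the sketch below shows a simple magnitude-based variant: the fine-tuned model is stored as the base weights plus a sparse delta that keeps only the largest-magnitude entries. The function name `sparsify_delta` and the `keep_ratio` parameter are assumptions for illustration; this is a generic baseline, not ImPart's actual importance-aware criterion.

```python
# Minimal sketch of delta sparsification (magnitude-based baseline, illustrative only).
import numpy as np

def sparsify_delta(w_base: np.ndarray, w_finetuned: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Return a sparse delta keeping only the top `keep_ratio` fraction of entries by magnitude."""
    delta = w_finetuned - w_base
    k = max(1, int(keep_ratio * delta.size))
    # Threshold at the k-th largest absolute value; smaller entries are dropped.
    threshold = np.partition(np.abs(delta).ravel(), -k)[-k]
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

# Usage: reconstruct an approximate fine-tuned weight matrix from 10% of the delta.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((512, 512)).astype(np.float32)
w_ft = w_base + 0.01 * rng.standard_normal((512, 512)).astype(np.float32)
sparse_delta = sparsify_delta(w_base, w_ft, keep_ratio=0.1)
w_approx = w_base + sparse_delta
rel_err = np.linalg.norm(w_ft - w_approx) / np.linalg.norm(w_ft - w_base)
print(f"nonzero fraction: {np.count_nonzero(sparse_delta) / sparse_delta.size:.1%}, "
      f"residual delta error: {rel_err:.3f}")
```

The memory savings come from storing the sparse delta (indices plus values) on top of a shared base checkpoint, so only the retained fraction of entries costs additional space.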