Recent work on large language models (LLMs) focuses heavily on improving computational efficiency and reducing model size without sacrificing performance. A prominent trend is the design of novel architectures and compression techniques for resource-constrained environments such as edge devices and mobile platforms. Memory layers at scale are being explored to augment model capacity without a corresponding increase in compute, with reported gains on factual tasks. RNN-based models such as RWKV are being deeply compressed to fit embedded systems, with minimal accuracy loss. Fine-grained, token-wise pruning is emerging as a way to reduce inference overhead while outperforming coarser, traditional pruning schemes. Frameworks such as SepLLM leverage special separator tokens to compress segment information and accelerate inference while preserving language-modeling quality. Activation sparsity is being investigated as a complement to existing compression techniques, offering substantial memory and compute savings with negligible accuracy degradation. Together, these directions aim to democratize access to powerful LLMs by making them deployable across diverse settings, from edge devices to streaming applications.
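To make the activation-sparsity idea concrete, the sketch below shows one simple way to exploit it: keep only the largest-magnitude activations per token and zero the rest, so that sparse kernels or skipped computation can save memory and FLOPs. This is a minimal illustration under assumed choices (top-k selection, the `keep_ratio` hyperparameter, and the `sparsify_activations` helper are ours), not the implementation used by any of the works summarized above.

```python
# Minimal activation-sparsity sketch (illustrative only, not a cited method's code).
import torch


def sparsify_activations(x: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Keep only the top `keep_ratio` fraction of activations per token by magnitude.

    `keep_ratio` is an assumed hyperparameter chosen for illustration.
    """
    k = max(1, int(keep_ratio * x.shape[-1]))
    # Indices of the k largest-magnitude entries along the hidden dimension.
    topk_idx = x.abs().topk(k, dim=-1).indices
    # Boolean mask that is True only at the retained positions.
    mask = torch.zeros_like(x, dtype=torch.bool).scatter_(-1, topk_idx, True)
    return x * mask


# Toy usage: 2 sequences of 4 tokens with hidden size 16.
h = torch.randn(2, 4, 16)
h_sparse = sparsify_activations(h, keep_ratio=0.25)
print((h_sparse != 0).float().mean().item())  # ~0.25: fraction of activations kept
```

In a real deployment the zeroed entries would be skipped by sparse matrix kernels or gated feed-forward blocks rather than merely masked, which is where the memory and compute savings come from.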