Recent work in this area advances both security and efficiency, particularly for large language models (LLMs) and edge computing. One notable trend is the development of frameworks that protect the foundational capabilities of LLMs deployed on edge devices against threats such as model stealing while preserving computational efficiency. These frameworks rely on trusted execution environments (TEEs) for secure communication and computation, addressing the limitations of traditional task-specific protection mechanisms.

A second line of work applies pre-trained language models to trajectory recovery, a core problem in spatiotemporal data analysis complicated by sparse data and varying sampling intervals. Related work uses single-layer transformers for trajectory similarity computation, improving on traditional methods in both effectiveness and efficiency.

In addition, new benchmarks for embodied task planning with LLMs underscore the need for models that can reason about complex spatial, temporal, and causal relationships, pushing research toward more sophisticated AI applications. Finally, optimizations for secure machine learning on GPU TEEs reduce communication overhead, yielding substantial performance gains for latency-sensitive cloud-based ML workloads.

Overall, the field is moving toward more secure, efficient, and versatile AI systems that can handle complex tasks in dynamic environments.
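To make the trajectory-recovery problem concrete, here is a minimal, dependency-free baseline: linear interpolation between sparse, irregularly spaced samples. This is a sketch of the problem setting only, not any specific published method; the function name and data layout `(t, x, y)` are assumptions for illustration.

```python
from bisect import bisect_right

def recover_trajectory(samples, query_times):
    """Estimate positions at query_times by linear interpolation
    between sparse, irregularly spaced (t, x, y) samples."""
    samples = sorted(samples)
    times = [t for t, _, _ in samples]
    out = []
    for q in query_times:
        i = bisect_right(times, q)
        if i == 0:                       # before first sample: clamp
            out.append(samples[0][1:])
        elif i == len(samples):          # after last sample: clamp
            out.append(samples[-1][1:])
        else:
            t0, x0, y0 = samples[i - 1]
            t1, x1, y1 = samples[i]
            w = (q - t0) / (t1 - t0)     # weight adapts to varying intervals
            out.append((x0 + w * (x1 - x0), y0 + w * (y1 - y0)))
    return out

samples = [(0, 0.0, 0.0), (10, 10.0, 0.0), (30, 10.0, 20.0)]
print(recover_trajectory(samples, [5, 20]))  # [(5.0, 0.0), (10.0, 10.0)]
```

A learned model replaces the linear weight `w` with predictions conditioned on movement patterns, which is where pre-trained language models come in.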
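The single-layer-transformer idea for trajectory similarity can likewise be illustrated with a toy, dependency-free sketch: one round of scaled dot-product self-attention over trajectory points, mean-pooled into a fixed-size embedding that is compared by cosine similarity. Real methods learn Q/K/V projections and positional encodings; this sketch assumes identity projections purely for illustration.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(points):
    """One layer of scaled dot-product self-attention over 2-D points
    (identity Q/K/V projections keep the sketch parameter-free)."""
    d = len(points[0])
    out = []
    for q in points:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                          for k in points])
        out.append([sum(w * v[j] for w, v in zip(scores, points))
                    for j in range(d)])
    return out

def embed(points):
    """Mean-pool the attention outputs into a trajectory embedding."""
    attn = self_attention(points)
    n = len(attn)
    return [sum(row[j] for row in attn) / n for j in range(len(attn[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

t1 = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
t2 = [[0.1, 0.0], [1.0, 1.1], [2.0, 2.1]]   # near-duplicate of t1
t3 = [[5.0, -1.0], [6.0, -2.0], [7.0, -3.0]]  # very different shape
print(cosine(embed(t1), embed(t2)) > cosine(embed(t1), embed(t3)))  # True
```

Because the whole trajectory is reduced to one vector, pairwise similarity becomes a cheap dot product, which is the efficiency argument behind these single-layer approaches.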