Optimizing LLMs and Serverless Computing: New Approaches

Current Trends in Large Language Model Optimization and Serverless Computing

Recent advances in large language models (LLMs) and serverless computing have introduced new approaches to improving efficiency, scalability, and cost-effectiveness. In the realm of LLMs, there is a notable shift toward optimizing inference with Mixture-of-Experts (MoE) architectures. These models, which dynamically activate specialized subnetworks, are being refactored to better align with system-level optimizations, addressing issues such as memory management and batching efficiency. This trend is exemplified by frameworks that transform pre-trained dense models into smaller, more efficient MoE variants, circumventing the high cost of training from scratch.
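
To make the sparse-activation idea concrete, the sketch below shows a minimal top-k gated MoE layer in PyTorch. The module and parameter names are illustrative assumptions and are not taken from Read-ME or any other cited system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Minimal Mixture-of-Experts layer: a router activates only top-k experts per token."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); each token is routed to its top-k experts.
        weights, idx = torch.topk(F.softmax(self.router(x), dim=-1), self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():  # only experts that actually received tokens run
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(TopKMoELayer(d_model=64, d_hidden=128)(tokens).shape)  # torch.Size([16, 64])
```

Because only the selected experts run for each token, which experts must be resident in memory and how their tokens are batched becomes the system-level bottleneck the paragraph above refers to.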

In parallel, serverless computing is witnessing a surge in solutions aimed at improving resource provisioning and model inference on edge devices. Predictive frameworks are being developed to dynamically allocate resources based on workload patterns, ensuring that service-level objectives are met while minimizing operational costs. Additionally, novel techniques for efficient model swapping and fusion are being explored to enhance the performance of on-demand inference services on resource-constrained edge devices.
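
As a rough illustration of prediction-driven provisioning, the sketch below forecasts the next window's request rate with an exponentially weighted average and sizes the container pool against a utilization target. The capacity constants, class names, and predictor are assumptions for illustration, not values or models from the cited work.

```python
import math

REQS_PER_CONTAINER_PER_SEC = 50.0   # hypothetical per-container capacity (assumption)
TARGET_UTILIZATION = 0.7            # headroom so latency SLOs hold under bursts

class MovingAveragePredictor:
    """Forecasts the next window's request rate with an exponentially weighted average."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.estimate = None

    def observe(self, rate: float) -> None:
        self.estimate = rate if self.estimate is None else (
            self.alpha * rate + (1 - self.alpha) * self.estimate
        )

    def predict(self) -> float:
        return self.estimate or 0.0

def containers_needed(predicted_rate: float) -> int:
    """Provision enough containers that predicted load stays under the utilization target."""
    capacity = REQS_PER_CONTAINER_PER_SEC * TARGET_UTILIZATION
    return max(1, math.ceil(predicted_rate / capacity))

predictor = MovingAveragePredictor()
for observed_rate in [80, 120, 200, 180]:          # requests/sec over recent windows
    predictor.observe(observed_rate)
print(containers_needed(predictor.predict()))      # containers to pre-warm for the next window
```

A production predictor would be trained on historical workload traces rather than a simple moving average, but the control loop has the same shape: predict the next window, then provision ahead of it.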

Proactive caching strategies for MoE-based LLM serving are also gaining traction: by anticipating which expert parameters will be needed next and loading them ahead of time, these systems reduce latency and improve overall serving performance.
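
The sketch below illustrates the general idea of proactive expert caching: predicted experts are fetched in a background thread so the load overlaps with computation of the current layer. The cache structure, names, and LRU policy are assumptions for illustration and do not reflect ProMoE's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

class ProactiveExpertCache:
    """Keeps a bounded set of resident experts and prefetches predicted ones ahead of use."""

    def __init__(self, load_fn, capacity: int = 4):
        self.load_fn = load_fn        # e.g. copies one expert's weights from host to GPU memory
        self.capacity = capacity
        self.resident = {}            # expert_id -> weights; insertion order doubles as LRU order
        self.pending = {}             # expert_id -> future for an in-flight prefetch
        self.pool = ThreadPoolExecutor(max_workers=1)

    def prefetch(self, expert_ids):
        """Start loading predicted experts in the background so the copy overlaps with compute."""
        for eid in expert_ids:
            if eid not in self.resident and eid not in self.pending:
                self.pending[eid] = self.pool.submit(self.load_fn, eid)

    def get(self, expert_id):
        """Return an expert's weights, blocking only if its prefetch is unfinished or missing."""
        if expert_id in self.pending:
            self.resident[expert_id] = self.pending.pop(expert_id).result()
        elif expert_id not in self.resident:
            self.resident[expert_id] = self.load_fn(expert_id)   # miss: synchronous load
        self.resident[expert_id] = self.resident.pop(expert_id)  # mark as most recently used
        while len(self.resident) > self.capacity:
            self.resident.pop(next(iter(self.resident)))         # evict the least recently used
        return self.resident[expert_id]

cache = ProactiveExpertCache(load_fn=lambda eid: f"weights-of-expert-{eid}")
cache.prefetch([2, 5])     # experts predicted for the next layer
print(cache.get(2))        # hit: the load already ran (or is running) in the background
```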

Noteworthy Developments

  • Read-ME: Introduces a system-friendly pre-gating router decoupled from the MoE backbone, significantly enhancing expert-aware batching and caching.
  • SLOPE: Utilizes neural network models to predict serverless resource requirements, reducing operating costs by up to 66.25%.
  • FusedInf: Combines multiple DNN models into a single Directed Acyclic Graph (DAG) for faster execution and reduced memory usage on edge devices (a simplified sketch of the fusion idea follows this list).
  • ProMoE: Proposes a proactive caching system for MoE-based LLM serving, achieving significant speedups in both prefill and decode stages.
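
The fusion idea behind FusedInf can be pictured as packaging several models into a single module that is loaded, swapped, and executed as one unit. The PyTorch sketch below is a simplified illustration under that assumption, not FusedInf's implementation or its actual DAG construction.

```python
import torch
import torch.nn as nn

class FusedGraph(nn.Module):
    """Wraps several independent models as one module so they load, swap, and run as a unit."""

    def __init__(self, models: dict):
        super().__init__()
        self.models = nn.ModuleDict(models)

    def forward(self, inputs: dict) -> dict:
        # One forward call executes every fused model on its own input batch.
        return {name: model(inputs[name]) for name, model in self.models.items()}

# Two toy "DNN services" fused into a single unit that can be saved and swapped in one operation.
fused = FusedGraph({
    "classifier": nn.Sequential(nn.Linear(32, 10)),
    "detector": nn.Sequential(nn.Linear(64, 4)),
})
torch.save(fused, "fused_services.pt")     # one artifact to ship to the edge device
out = fused({"classifier": torch.randn(1, 32), "detector": torch.randn(1, 64)})
print({name: tensor.shape for name, tensor in out.items()})
```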

Sources

Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design

Prediction-driven resource provisioning for serverless container runtimes

FusedInf: Efficient Swapping of DNN Models for On-Demand Serverless Inference Services on the Edge

Histrio: a Serverless Actor System

ProMoE: Fast MoE-based LLM Serving using Proactive Caching
