The field of machine learning is moving toward more efficient and scalable methods for training large language models. Recent research focuses on improving the performance of distributed training frameworks, enabling the use of heterogeneous computing resources, and optimizing parallelism strategies. These efforts have produced significant advances in large-scale training, including the ability to train such models on a single node and the development of novel parallelism techniques. Notable papers in this area include NNTile, which presents a machine learning framework for training large deep neural networks on heterogeneous clusters, and MoE Parallel Folding, which introduces a novel strategy for efficient large-scale MoE model training with hybrid parallelism. Sailor is also noteworthy: it automates distributed training over dynamic, heterogeneous, and geo-distributed clusters, jointly optimizing training throughput and cost.
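
To make the notion of hybrid parallelism more concrete, the sketch below is a minimal, hypothetical illustration (not taken from MoE Parallel Folding, NNTile, or Sailor) of how a pool of worker ranks can be viewed as a grid over tensor-, expert-, and data-parallel dimensions, with each rank belonging to one group per dimension; the function and group-id scheme are invented for this example.

```python
# Minimal sketch of a hybrid-parallel rank layout: the global ranks are
# arranged on a (data, expert, tensor) grid, and each rank is assigned to
# one group per parallelism dimension. Illustrative only; real frameworks
# also decide the degrees and map groups onto the physical hardware.
from itertools import product


def hybrid_parallel_groups(world_size: int, tp: int, ep: int) -> dict:
    """Assign each rank to its tensor-, expert-, and data-parallel groups.

    Assumes world_size is divisible by tp * ep; the remaining factor is
    the data-parallel degree. Group ids are just the fixed coordinates
    shared by the ranks in that group.
    """
    assert world_size % (tp * ep) == 0, "world_size must be divisible by tp * ep"
    dp = world_size // (tp * ep)
    layout = {}
    for d, e, t in product(range(dp), range(ep), range(tp)):
        rank = (d * ep + e) * tp + t
        layout[rank] = {
            "tp_group": (d, e),  # ranks sharing (d, e) differ only in t
            "ep_group": (d, t),  # ranks sharing (d, t) differ only in e
            "dp_group": (e, t),  # ranks sharing (e, t) differ only in d
        }
    return layout


if __name__ == "__main__":
    # 16 ranks with tensor-parallel degree 2 and expert-parallel degree 4,
    # leaving a data-parallel degree of 2.
    for rank, groups in hybrid_parallel_groups(16, tp=2, ep=4).items():
        print(rank, groups)
```

Choosing these degrees well, and mapping the resulting groups onto heterogeneous or geo-distributed hardware, is the harder problem that the systems above aim to automate.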