The field of machine learning and coding theory is witnessing significant developments, driven by the need for efficient and scalable solutions. Recent research has focused on improving training algorithms for large-scale machine learning models, with a particular emphasis on sparse models and distributed training. Notably, new methods for training block-wise sparse models and hybrid parallelism strategies have been proposed, demonstrating substantial reductions in computation and memory costs. Advances in coding theory have also led to new MDS Euclidean self-dual codes and quasi-cyclic codes, which have important implications for data storage and transmission, while research on sumcheck protocols and matrix code equivalence problems has yielded more efficient algorithms and novel applications.

Some noteworthy papers in this regard include:

- An efficient training algorithm for models with block-wise sparse matrices, which reduces computation and memory costs without performance drops (a generic block-sparse sketch is shown below).
- A comparative analysis of distributed training strategies for large-scale neural networks, achieving a 3.2x speedup over single-device training (see the data-parallel sketch below).
- New MDS Euclidean self-dual codes constructed via generalized Reed-Solomon codes and their extended codes, covering more than 85% of the possible MDS Euclidean self-dual codes (a GRS encoding sketch follows below).
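
The block-wise sparse training paper's algorithm is not reproduced here, but the general idea behind block-wise sparsity can be illustrated with a minimal NumPy sketch: only the nonzero blocks are stored and multiplied, so compute and memory scale with the number of nonzero blocks rather than with the full matrix. The function name, block layout, and sizes below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def block_sparse_matvec(blocks, block_index, x, block_size, n_rows):
    """Multiply a block-wise sparse matrix by a dense vector.

    Only the nonzero dense blocks are stored (in `blocks`) together with their
    (block_row, block_col) positions (in `block_index`); zero blocks are never
    stored or touched, which is where the memory and compute savings come from.
    """
    y = np.zeros(n_rows)
    for blk, (bi, bj) in zip(blocks, block_index):
        r0, c0 = bi * block_size, bj * block_size
        y[r0:r0 + block_size] += blk @ x[c0:c0 + block_size]
    return y

# Toy example: an 8x8 matrix with only two nonzero 4x4 blocks (block-diagonal).
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((4, 4)), rng.standard_normal((4, 4))]
block_index = [(0, 0), (1, 1)]
x = rng.standard_normal(8)
y = block_sparse_matvec(blocks, block_index, x, block_size=4, n_rows=8)
```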
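
The comparative analysis covers several distributed strategies; the sketch below simulates only the simplest one, data parallelism, in plain NumPy: each worker computes gradients on its shard of the batch and the gradients are averaged before the shared parameters are updated. The worker count, model, and learning rate are arbitrary illustrative choices, not parameters or results from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)                                   # shared linear-model parameters
X, y = rng.standard_normal((64, 4)), rng.standard_normal(64)

def worker_grad(w, X_shard, y_shard):
    """Least-squares gradient computed on one worker's data shard."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

n_workers, lr = 4, 0.05
for step in range(100):
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [worker_grad(w, Xs, ys) for Xs, ys in shards]   # parallel in practice
    w -= lr * np.mean(grads, axis=0)                        # all-reduce average
```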
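
For the coding-theory results, the sketch below shows only the textbook encoding of a generalized Reed-Solomon (GRS) code over a small prime field; it is not the paper's self-dual construction, just the GRS ingredient such constructions build on. The field size, evaluation points, and column multipliers are arbitrary assumptions chosen for readability.

```python
# A minimal GRS encoder over GF(p), p prime, using plain modular arithmetic.
# GRS codes are MDS: an [n, k] GRS code has minimum distance n - k + 1.
p = 13                       # field size (prime, so integer arithmetic mod p works)
n, k = 6, 3                  # code length and dimension (n <= p)
alphas = [1, 2, 3, 4, 5, 6]  # distinct evaluation points in GF(p)
vs     = [1, 1, 2, 3, 1, 5]  # nonzero column multipliers

def grs_encode(msg):
    """Encode a length-k message as (v_i * f(alpha_i)) where f has coefficients msg."""
    assert len(msg) == k
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
    return [(v * f(a)) % p for v, a in zip(vs, alphas)]

codeword = grs_encode([5, 0, 7])   # one codeword of this [6, 3, 4] GRS code
```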
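
Finally, to make concrete what the sumcheck line of work optimizes, here is a minimal honest-prover simulation of the classic sumcheck protocol: the prover convinces the verifier that the sum of an n-variate polynomial over the Boolean cube equals a claimed value, one variable per round. The field, example polynomial, and function names are illustrative and unrelated to any specific paper.

```python
import random

p = 2**31 - 1  # a Mersenne prime field

def sumcheck(g, n, claimed_sum, rng=random.Random(0)):
    """Run n rounds of sumcheck for an n-variate polynomial g over GF(p)."""
    claim, fixed = claimed_sum % p, []
    for i in range(n):
        # Honest prover: round polynomial g_i(X) sums g over the remaining
        # Boolean variables, with the first i variables fixed to past challenges.
        def g_i(x, i=i, fixed=tuple(fixed)):
            total = 0
            for mask in range(2 ** (n - i - 1)):
                tail = [(mask >> j) & 1 for j in range(n - i - 1)]
                total += g(list(fixed) + [x] + tail)
            return total % p
        # Verifier check: the round polynomial must be consistent with the claim.
        if (g_i(0) + g_i(1)) % p != claim:
            return False
        r = rng.randrange(p)                  # verifier's random challenge
        claim, fixed = g_i(r), fixed + [r]
    # Final check: a single evaluation of g at the random point.
    return g(fixed) % p == claim

# Example: g(x1, x2, x3) = x1*x2 + 3*x3; its sum over {0,1}^3 is 2 + 12 = 14.
g = lambda x: (x[0] * x[1] + 3 * x[2]) % p
print(sumcheck(g, 3, 14))  # True for an honest prover
```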