Recent work in music information processing has shifted markedly toward neural sequence models, particularly Transformers and LSTMs, for tasks such as audio-to-score conversion, multitrack sheet-music generation, and chord-progression accompaniment. These models are increasingly tailored to musical sequences by incorporating domain-specific knowledge, such as musical symmetries and custom tokenization schemes. Notably, encoding group-theoretic principles, such as invariance under transposition, into Transformer architectures has shown promising gains in the accuracy and efficiency of music generation models.

Neural networks are also seeing growing use in predicting musical events, where they have outperformed traditional statistical models. This suggests they could substantially advance music cognition and neuroscience research by providing more accurate and nuanced models of musical behavior. In parallel, custom notation systems and tokenizers are streamlining the conversion of complex musical information into machine-readable token sequences, enabling richer analysis and generation; the sketches below illustrate these basic ideas. Overall, the field is moving toward theory-driven models that improve computational efficiency while also enhancing the musicality and interpretability of generated output.
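To make the tokenization idea concrete, the following is a minimal sketch of an event-based music tokenizer that maps notes to integer token IDs and back. The `NoteEvent` and `NoteTokenizer` names and the pitch/duration vocabulary scheme are illustrative assumptions, not the API of any particular library mentioned in the literature.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """A single note: MIDI pitch (0-127) and duration in sixteenth notes."""
    pitch: int
    duration: int

class NoteTokenizer:
    """Map note events to integer token IDs and back.

    Vocabulary: one PITCH_<n> token per MIDI pitch, one DUR_<n> token
    per supported duration, plus <bos>/<eos> markers.
    """

    def __init__(self, max_duration: int = 16):
        self.vocab = ["<bos>", "<eos>"]
        self.vocab += [f"PITCH_{p}" for p in range(128)]
        self.vocab += [f"DUR_{d}" for d in range(1, max_duration + 1)]
        self.token_to_id = {tok: i for i, tok in enumerate(self.vocab)}

    def encode(self, notes: list[NoteEvent]) -> list[int]:
        ids = [self.token_to_id["<bos>"]]
        for note in notes:
            ids.append(self.token_to_id[f"PITCH_{note.pitch}"])
            ids.append(self.token_to_id[f"DUR_{note.duration}"])
        ids.append(self.token_to_id["<eos>"])
        return ids

    def decode(self, ids: list[int]) -> list[NoteEvent]:
        toks = [self.vocab[i] for i in ids
                if self.vocab[i] not in ("<bos>", "<eos>")]
        # Tokens alternate PITCH, DUR; pair them back into notes.
        return [NoteEvent(int(p.split("_")[1]), int(d.split("_")[1]))
                for p, d in zip(toks[0::2], toks[1::2])]

# Round-trip a two-note motif: C4 (quarter note) then E4 (eighth note).
tokenizer = NoteTokenizer()
motif = [NoteEvent(60, 4), NoteEvent(64, 2)]
assert tokenizer.decode(tokenizer.encode(motif)) == motif
```

Real systems add many more event types (bars, velocities, instrument tracks), but the core design choice is the same: a fixed symbolic vocabulary that turns a score into a sequence a language model can consume.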
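The group-theoretic idea can be illustrated at the data level. The transposition group acts on a pitch sequence by shifting every pitch by the same number of semitones; a model becomes invariant to that action if it sees intervals rather than absolute pitches, or equivariant-friendly if it is trained on the whole orbit of each melody. This is a simplified stand-in for the architectural approaches referenced above; the function names are hypothetical.

```python
def transpose(pitches: list[int], semitones: int) -> list[int]:
    """Group action: shift every pitch by the same number of semitones."""
    return [p + semitones for p in pitches]

def to_intervals(pitches: list[int]) -> list[int]:
    """Interval representation: differences between consecutive pitches.

    Invariant under transposition, since the constant shift cancels
    in every difference.
    """
    return [b - a for a, b in zip(pitches, pitches[1:])]

melody = [60, 62, 64, 65, 67]   # C major scale fragment
for shift in (-12, 0, 7):       # act with a few group elements
    assert to_intervals(transpose(melody, shift)) == to_intervals(melody)

def transposition_augment(pitches: list[int],
                          shifts=range(-6, 7)) -> list[list[int]]:
    """Alternative: keep absolute pitches but train on the melody's
    orbit under transposition, discarding out-of-range results."""
    return [transpose(pitches, s) for s in shifts
            if all(0 <= p <= 127 for p in transpose(pitches, s))]
```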
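Finally, as a sketch of neural event prediction, here is a small PyTorch LSTM trained to predict the next token in sequences produced by a tokenizer like the one above. The vocabulary size, hyperparameters, and `NextEventLSTM` name are placeholders, not any particular published model.

```python
import torch
import torch.nn as nn

class NextEventLSTM(nn.Module):
    """Predict the next musical token given a prefix of token IDs."""

    def __init__(self, vocab_size: int,
                 embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> logits over the next token
        # at every position: (batch, seq_len, vocab_size).
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.head(hidden)

# One training step on a toy batch: the target at each position is
# simply the following token (standard next-event prediction).
vocab_size = 146   # e.g. 2 specials + 128 pitches + 16 durations
model = NextEventLSTM(vocab_size)
batch = torch.randint(0, vocab_size, (8, 32))  # 8 sequences of 32 tokens
logits = model(batch[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1))
loss.backward()
print(f"toy cross-entropy: {loss.item():.3f}")
```

The same training loop applies unchanged to a Transformer decoder; the choice between the two architectures trades the LSTM's compactness against the Transformer's longer-range context.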