Recent developments in language and speech processing highlight a significant shift toward addressing the challenges of low-resource and no-resource languages, as well as the complexities of multilingual and code-switching scenarios. Innovations focus particularly on improving automatic speech recognition (ASR) and text-to-speech (TTS) systems for languages with limited digital resources, employing techniques such as prompt-tuning, tokenization, and large language models (LLMs) to improve accuracy and efficiency. There is also a growing emphasis on creating and using large-scale, diverse datasets to train models that can handle the nuances of specific languages and dialects, including underrepresented ones. The field is likewise seeing advances in specialized models for tasks such as diacritization, transliteration, and speaker recognition, which are crucial for preserving linguistic diversity and improving access to technology for speakers of all languages.

Noteworthy papers include:

- A study on Indonesian-English code-switching in TTS systems, which introduces a novel approach to language identification and achieves superior naturalness and intelligibility.
- Research on enhancing Whisper's performance for Indian languages through prompt-tuning and a novel tokenizer, demonstrating significant improvements in accuracy and speed.
- The introduction of YAD, a benchmark dataset for Yorùbá diacritization, showcasing the effectiveness of pre-trained T5 models.
- The development of HindiLLM, a large language model for Hindi, which outperforms existing models on various language processing tasks.
- A comparative evaluation of approaches to no-resource language translation, highlighting the potential of in-context learning with LLMs (a prompt-construction sketch follows this list).
- The creation of VoxVietnam, a large-scale multi-genre dataset for Vietnamese speaker recognition, addressing the challenges of genre diversity.
- An end-to-end framework that augments Wav2Vec 2.0 for superior ASR in low-resource languages, showing significant reductions in error rates.
- The introduction of Fotheidil, the first web-based transcription system for the Irish language, which uses semi-supervised learning to improve performance.
- A comparative analysis of rule-based and Seq2Seq approaches to Sinhala transliteration, with the Transformer-based method showing superior pattern recognition.
- Research on layer pruning for smaller BERT models in low-resource languages, demonstrating that pruned models can maintain high performance with reduced complexity (a pruning sketch also follows this list).
- The standardization of the largest spoken Singlish corpus and the proposal of SingAudioLLM, a multi-task multimodal model, advancing the understanding of Singlish.
- A study on the robustness of cover version identification models using YouTube data, providing insights into the challenges of identifying cover songs online.
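To make the in-context learning approach to translation concrete, here is a minimal sketch of few-shot prompt construction. The language pair, example sentences, and prompt format are illustrative assumptions, not details taken from the evaluated papers.

```python
# A hedged sketch of few-shot in-context translation prompting with an LLM.
# The example pairs below are placeholders; in a no-resource setting they
# would come from whatever small parallel data is available.
few_shot_pairs = [
    ("Good morning.", "Bonjour."),
    ("Thank you very much.", "Merci beaucoup."),
]

def build_prompt(source_sentence, pairs, src="English", tgt="French"):
    """Assemble a few-shot translation prompt from demonstration pairs."""
    lines = [f"Translate {src} to {tgt}."]
    for s, t in pairs:
        lines.append(f"{src}: {s}\n{tgt}: {t}")
    # End with the new source sentence and an open target slot for the model.
    lines.append(f"{src}: {source_sentence}\n{tgt}:")
    return "\n\n".join(lines)

prompt = build_prompt("Where is the station?", few_shot_pairs)
# The prompt is then sent to any instruction-following LLM; the completion
# after the final target-language marker is taken as the translation.
```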
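Similarly, the layer-pruning idea can be illustrated with a short sketch, assuming a BERT-style encoder from the Hugging Face Transformers library. The checkpoint and the choice of which layers to keep are assumptions for illustration, not the cited paper's exact recipe.

```python
# A minimal sketch of structured layer pruning for a smaller BERT model.
import torch.nn as nn
from transformers import AutoModel

def prune_bert_layers(model, layers_to_keep):
    """Keep only the listed encoder layers (0-indexed), preserving order."""
    kept = nn.ModuleList(model.encoder.layer[i] for i in layers_to_keep)
    model.encoder.layer = kept
    # Keep the config consistent with the shrunken encoder.
    model.config.num_hidden_layers = len(kept)
    return model

model = AutoModel.from_pretrained("bert-base-multilingual-cased")
# Keep every other layer of the 12-layer encoder, halving depth and
# parameter count in the transformer stack.
small = prune_bert_layers(model, layers_to_keep=[0, 2, 4, 6, 8, 10])
```

A pruned model of this kind is typically fine-tuned afterward on the downstream low-resource task so the remaining layers can compensate for the removed capacity.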