The field of natural language processing is undergoing a significant shift towards personalization and adaptation in large language models (LLMs), with researchers exploring approaches that tailor LLMs to individual users' preferences, needs, and contexts. One notable direction is the development of probabilistic graphical models that account for flexible pronominal reference, enabling computational systems to remain both adaptable and respectful of diverse user groups. Another area of focus is combining collaborative filtering with retrieval-augmented generation (RAG) to improve personalized text generation. There is also growing interest in pretraining LLMs for diachronic linguistic change discovery, which supports the detection of phenomena such as lexical change, grammatical change, and word sense introduction and obsolescence. At the same time, increased personalization introduces new vulnerabilities, such as adversarial ranking manipulation, which researchers are addressing with robust optimization frameworks.

Noteworthy papers in this area include:

- A Bayesian account of pronoun and neopronoun acquisition, which presents a probabilistic graphical modeling approach to flexible pronominal reference.
- Pretraining Language Models for Diachronic Linguistic Change Discovery, which shows that efficient pretraining techniques can produce models useful for historical linguistics and literary studies.
- Retrieval Augmented Generation with Collaborative Filtering for Personalized Text Generation, which proposes CFRAG, a method that adapts collaborative filtering to RAG for personalized text generation; a minimal illustrative sketch of this idea appears below.
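To make the collaborative-filtering-plus-RAG idea concrete, the sketch below pools documents from a target user and their most similar users (found via cosine similarity over an interaction matrix), retrieves the most query-relevant ones, and prepends them to the prompt. This is a minimal illustration under stated assumptions, not the CFRAG authors' implementation; all function names, the toy embedding, and the scoring scheme are hypothetical.

```python
# Minimal sketch: collaborative filtering + retrieval-augmented generation for
# personalization. Names, toy data, and scoring are illustrative assumptions,
# not the CFRAG paper's actual method.
import numpy as np

def top_k_similar_users(interactions: np.ndarray, user: int, k: int) -> list[int]:
    """Return indices of the k users with the most similar interaction vectors (cosine)."""
    target = interactions[user]
    sims = interactions @ target / (
        np.linalg.norm(interactions, axis=1) * np.linalg.norm(target) + 1e-9
    )
    sims[user] = -np.inf  # exclude the target user themselves
    return list(np.argsort(sims)[::-1][:k])

def retrieve(candidates: list[str], cand_vecs: np.ndarray, query_vec: np.ndarray, n: int) -> list[str]:
    """Rank candidate documents by similarity to the query embedding and keep the top n."""
    order = np.argsort(cand_vecs @ query_vec)[::-1][:n]
    return [candidates[i] for i in order]

def personalized_prompt(query: str, user: int, interactions: np.ndarray,
                        user_docs: dict[int, list[str]], embed) -> str:
    """Pool documents from the target user and their nearest neighbours,
    retrieve the most query-relevant ones, and prepend them to the prompt."""
    neighbours = top_k_similar_users(interactions, user, k=2)
    pool = [d for u in (user, *neighbours) for d in user_docs.get(u, [])]
    query_vec = embed(query)
    cand_vecs = np.stack([embed(d) for d in pool])
    context = "\n".join(f"- {d}" for d in retrieve(pool, cand_vecs, query_vec, n=3))
    return f"Context from this user and similar users:\n{context}\n\nTask: {query}"

# Toy usage with a hash-based stand-in embedding; a real system would use a trained encoder.
def embed(text: str, dim: int = 64) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

interactions = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]], dtype=float)  # users x items
user_docs = {0: ["Prefers concise bullet-point summaries."],
             1: ["Often writes about historical linguistics."],
             2: ["Asks for code examples in Python."]}
print(personalized_prompt("Summarize this week's NLP papers.", user=0,
                          interactions=interactions, user_docs=user_docs, embed=embed))
```

The design choice illustrated here is that personalization signals come from two sources: the user's own history and the histories of collaboratively similar users, which is what distinguishes this line of work from standard single-user RAG.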