The field of recommender systems is moving toward more personalized and adaptive approaches, with particular attention to data sparsity, cold-start problems, and user addiction. Recent work explores generative models, transfer learning, and reinforcement learning to improve both the accuracy and the diversity of recommendations. Notably, new item-tokenization techniques, including universal item tokenization, have been proposed to enhance the representational capacity of generative recommender models. There is also growing interest in recommender systems that prioritize user well-being and mitigate the risk of addiction. Noteworthy papers include:
- MTGRec, which proposes a multi-identifier item tokenization approach for generative recommender pre-training;
- UTGRec, which introduces a universal item tokenization approach for transferable generative recommendation;
- TTA4SR, which explores test-time augmentation for sequential recommendation; and
- SEC, which proposes a novel imitation learning framework for user retention in large-scale recommender systems.
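
To make the item-tokenization idea concrete: generative recommenders typically map each item's embedding to a short sequence of discrete codes (a "semantic ID"), often via residual quantization, so that a sequence model can generate items token by token. The sketch below is a minimal, hedged illustration of that general technique; the function name, toy data, and codebooks are invented for this example and are not the actual MTGRec or UTGRec implementations.

```python
import numpy as np

def tokenize_items(item_embeddings, codebooks):
    """Map each item embedding to a tuple of discrete codes via
    residual quantization: at each level, pick the nearest codeword
    and pass the remaining residual to the next level.

    This is a generic sketch of semantic-ID tokenization, not the
    method of any specific paper.
    """
    tokens = []
    for emb in item_embeddings:
        residual = emb.astype(float).copy()
        codes = []
        for codebook in codebooks:
            # nearest codeword by Euclidean distance
            dists = np.linalg.norm(codebook - residual, axis=1)
            idx = int(np.argmin(dists))
            codes.append(idx)
            residual = residual - codebook[idx]
        tokens.append(tuple(codes))
    return tokens

# toy example: 4 items in 2-D, two quantization levels of 3 codewords each
rng = np.random.default_rng(0)
items = rng.normal(size=(4, 2))
codebooks = [rng.normal(size=(3, 2)) for _ in range(2)]
print(tokenize_items(items, codebooks))
```

Each item thus becomes a fixed-length code tuple drawn from small vocabularies, which a generative model can emit autoregressively; multi-identifier and universal variants of this idea differ in how many such code sequences an item gets and whether the codebooks transfer across domains.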