Large Language Models in Recommender Systems

The field of recommender systems is undergoing a significant shift driven by the growing application of large language models (LLMs). Recent research has focused on fine-tuning LLMs for recommendation tasks, addressing the fundamental gap between how LLMs generate text and what recommender systems require. Novel loss functions tailored for recommendation are improving the alignment of LLMs with recommendation objectives, while in-context learning methods and the integration of structural information from knowledge graphs are enhancing the performance of LLM-based recommenders. Theoretical analyses and practical designs for optimizing the generation of in-context learning content and the selection of negative examples for unlearning are also advancing the field. Noteworthy papers include one that proposes a Masked Softmax Loss to address the limitations of the standard language modeling loss, and another that introduces a framework connecting preference alignment with LLM unlearning, leveraging bi-level optimization to efficiently select which examples to unlearn for optimal performance.
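The core intuition behind a masked softmax loss can be sketched as follows: instead of normalizing over the full vocabulary as in standard language modeling, the softmax is restricted to the tokens that are valid at the current decoding step (e.g., tokens that can continue a real item title). This is a minimal illustrative sketch, not the exact formulation from the MSL paper; the function name and the NumPy implementation are assumptions for illustration.

```python
import numpy as np

def masked_softmax_loss(logits, target, valid_mask):
    """Cross-entropy computed only over tokens deemed valid at this step.

    logits: (V,) raw scores over the vocabulary
    target: index of the ground-truth token (assumed valid)
    valid_mask: boolean (V,), True for tokens allowed to continue
        a legitimate item identifier at this decoding step
    """
    # Invalid tokens get -inf so exp() sends them to zero probability.
    masked = np.where(valid_mask, logits, -np.inf)
    masked = masked - masked[valid_mask].max()  # numerical stability
    probs = np.exp(masked) / np.exp(masked)[valid_mask].sum()
    return -np.log(probs[target])

logits = np.array([2.0, 1.0, 0.5, -1.0])
mask = np.array([True, True, False, False])
# Masking out invalid tokens yields a lower loss than the full softmax,
# since probability mass is no longer wasted on impossible continuations.
msl = masked_softmax_loss(logits, 0, mask)
full = -np.log(np.exp(2.0) / np.exp(logits).sum())
```

Because invalid tokens are excluded from the normalization, the model is no longer penalized for assigning them low probability, which is the kind of misalignment between the language-modeling objective and the recommendation objective that such a loss targets.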

Sources

MSL: Not All Tokens Are What You Need for Tuning LLM as a Recommender

Decoding Recommendation Behaviors of In-Context Learning LLMs Through Gradient Descent

Can LLM-Driven Hard Negative Sampling Empower Collaborative Filtering? Findings and Potentials

DiffusionCom: Structure-Aware Multimodal Diffusion Model for Multimodal Knowledge Graph Completion

Bridging the Gap Between Preference Alignment and Machine Unlearning
