The field of information retrieval is shifting toward neural methods, most visibly generative retrieval and dense passage retrieval. These approaches have improved retrieval effectiveness, particularly in conversational search and open-domain question answering. Large language models and dual-encoder architectures produce contextual embeddings that can be indexed and clustered efficiently, yielding higher retrieval accuracy. In addition, work on training and inference scaling laws has clarified the mechanisms underlying the performance and scalability of generative retrieval. Noteworthy papers in this area include:
- A paper proposing an end-to-end conversational search system that incorporates dense retrieval and query reformulation strategies, which outperforms traditional methods even without extensive fine-tuning.
- A paper introducing a universal generative retrieval framework that supports diverse tasks across multiple modalities and domains, achieving competitive performance with embedding-based methods while preserving efficiency.
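To make the dual-encoder idea concrete, here is a minimal sketch of dense retrieval: passages and queries are mapped to vectors, the passage vectors are stored in an index, and retrieval is a nearest-neighbor search by cosine similarity. The random vectors below are stand-ins for the outputs of trained encoders (an assumption for illustration); real systems use neural encoders and an approximate-nearest-neighbor index.

```python
import numpy as np

# Stand-in for encoder outputs: in a real dual-encoder system these
# vectors would come from two trained neural networks (one for queries,
# one for passages). Random vectors are used here purely for illustration.
rng = np.random.default_rng(0)
passage_vecs = rng.normal(size=(100, 64)).astype(np.float32)

def normalize(x):
    # L2-normalize so that a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

index = normalize(passage_vecs)  # the "index": one unit vector per passage

def retrieve(query_vec, k=5):
    # Score every passage against the query and return the top-k matches.
    scores = index @ normalize(query_vec)
    top = np.argsort(-scores)[:k]
    return top, scores[top]

query = rng.normal(size=64).astype(np.float32)
ids, scores = retrieve(query, k=5)
```

Production systems replace the brute-force dot product with an approximate index (e.g., inverted-file or graph-based structures), which is where the clustering of embeddings mentioned above comes in: passages are grouped so that only a few clusters need to be searched per query.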