The field of artificial intelligence is moving towards a more nuanced understanding of the strengths and limitations of different approaches to multiclass classification and model merging. Recent research highlights that embeddings-based approaches can outperform large language models (LLMs) on multiclass classification, particularly when proprietary training data is available to fit a lightweight classifier on top of the embeddings. In parallel, new methods for exact unlearning and model merging address key limitations of existing approaches and demonstrate significant improvements in accuracy and efficiency. These advances have direct implications for building more effective and efficient predictive models. Noteworthy papers include:
- Beyond the Hype: Embeddings vs. Prompting for Multiclass Classification Tasks, which shows that classifiers trained on embeddings can outperform LLM prompting on multiclass classification tasks (a rough sketch of this setup follows the list).
- Exact Unlearning of Finetuning Data via Model Merging at Scale, which proposes a merging-based scheme that makes exact unlearning of finetuning data tractable at scale (illustrated in the second sketch below).
- MASS: MoErging through Adaptive Subspace Selection, which presents an adaptive subspace selection approach to model merging that achieves state-of-the-art performance across tasks.
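To make the embeddings-vs-prompting contrast concrete, here is a minimal sketch of the embeddings-based approach: encode documents once, then train a lightweight classifier on the vectors instead of prompting an LLM per example. This assumes the sentence-transformers and scikit-learn packages; the encoder name, texts, and labels are illustrative placeholders, not the paper's actual setup.

```python
# Sketch: embeddings + a small classifier in place of per-example LLM prompting.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Placeholder training data; in practice this would be a proprietary dataset.
texts = ["refund my order", "app crashes on launch", "how do I reset my password"]
labels = ["billing", "bug", "account"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works
X = encoder.encode(texts)                          # one vector per document

# A lightweight multiclass classifier trained on the embeddings.
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Inference is a single encode + predict, with no prompt engineering.
print(clf.predict(encoder.encode(["my payment failed"])))
```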
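For the unlearning result, the sketch below illustrates one common shard-and-merge formulation of exact unlearning: finetune a separate model per data shard, merge them by parameter averaging, and unlearn a shard by re-merging without its model. This is an illustration of the general idea under those assumptions, not the paper's exact algorithm, and the tiny linear models and random shards are placeholders.

```python
# Sketch: shard-wise finetuning, merging, and exact unlearning by re-merging.
import copy
import torch
import torch.nn as nn

def finetune_on_shard(base: nn.Module, shard) -> nn.Module:
    """Finetune a copy of the base model on one data shard."""
    model = copy.deepcopy(base)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for x, y in shard:  # one pass over the shard for brevity
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model

def merge(models):
    """Merge by uniform parameter averaging over the shard models."""
    merged = copy.deepcopy(models[0])
    state = merged.state_dict()
    for key in state:
        state[key] = torch.stack([m.state_dict()[key] for m in models]).mean(0)
    merged.load_state_dict(state)
    return merged

base = nn.Linear(4, 1)
shards = [[(torch.randn(8, 4), torch.randn(8, 1))] for _ in range(3)]
shard_models = [finetune_on_shard(base, s) for s in shards]

full_model = merge(shard_models)

# Exact unlearning of shard 0: drop its model and re-merge. The remaining
# shard models need no retraining, and shard 0's data never touched them,
# so its influence is removed exactly rather than approximately.
unlearned_model = merge(shard_models[1:])
```

The appeal of this formulation is that unlearning cost is a re-merge, not a retrain, which is what makes it plausible at scale.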