The field of large language models is moving toward more efficient fine-tuning methods, with a focus on reducing computational cost and the number of trainable parameters. Recent work has introduced novel approaches to parameter-efficient transfer learning (PETL), such as integrating shared and layer-specific information, using low-rank symmetric weight matrices, and leveraging Fisher information to select critical parameters. These innovations deliver significant performance gains while updating only a small fraction of a model's parameters. Noteworthy papers include:
- Optimizing Specific and Shared Parameters for Efficient Parameter Tuning, which proposes a PETL method that combines shared and layer-specific parameters to mitigate distributional shifts during fine-tuning.
- FISH-Tuning, which incorporates the FISH Mask (a Fisher-information-based selection of which parameters to update) into addition-based and reparameterization-based parameter-efficient fine-tuning (PEFT) methods, improving performance without additional memory overhead or inference latency; a minimal sketch of Fisher-based parameter selection follows this list.
- AROMA, which introduces a dual-loop architecture that grows adapter rank during training, substantially reducing trainable parameters relative to LoRA and AdaLoRA while achieving superior performance on natural language understanding and commonsense reasoning tasks; a rough sketch of incremental rank growth also follows the list.
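
This summary does not spell out how FISH-Tuning integrates the mask into each PEFT method, but the underlying idea of Fisher-based parameter selection can be illustrated with a minimal PyTorch sketch: approximate the diagonal Fisher information of each parameter by its squared gradient accumulated over a few batches, then restrict updates to the highest-scoring entries. The function names, the `keep_ratio` parameter, and the gradient-masking strategy below are illustrative assumptions, not the paper's implementation.

```python
import torch

def compute_fisher_mask(model, data_loader, loss_fn, keep_ratio=0.01, num_batches=8):
    """Sketch: approximate diagonal Fisher information by the sum of squared
    gradients over a few batches, then keep only the top `keep_ratio`
    fraction of parameter entries as trainable. Not the paper's exact procedure."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    for i, (inputs, labels) in enumerate(data_loader):
        if i >= num_batches:
            break
        model.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        for n, p in model.named_parameters():
            if n in fisher and p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    # Pool all scores to find a single global threshold for the top-k entries.
    all_scores = torch.cat([f.flatten() for f in fisher.values()])
    k = max(1, int(keep_ratio * all_scores.numel()))
    threshold = torch.topk(all_scores, k).values.min()
    return {n: (f >= threshold).float() for n, f in fisher.items()}

def apply_fisher_mask(model, mask):
    """Zero out gradients outside the mask (call after backward, before
    optimizer.step()) so only the selected high-Fisher entries are updated."""
    for n, p in model.named_parameters():
        if n in mask and p.grad is not None:
            p.grad.mul_(mask[n])
```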
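
Likewise, AROMA's dual-loop procedure is not detailed in this summary; the sketch below only illustrates the general notion it builds on: a LoRA-style adapter whose rank can be increased during training while the pretrained weight stays frozen. The class name, initialization scheme, and the decision of when to grow are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class GrowableLoRALinear(nn.Module):
    """LoRA-style adapter y = W x + (alpha / r) * B A x whose rank r can be
    increased during training. A rough illustration of incremental rank
    growth; AROMA's dual-loop schedule and stopping criteria are not reproduced."""

    def __init__(self, base_linear: nn.Linear, init_rank: int = 1, alpha: float = 8.0):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():        # freeze the pretrained weights
            p.requires_grad_(False)
        self.alpha = alpha
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.A = nn.Parameter(torch.randn(init_rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, init_rank))

    @property
    def rank(self) -> int:
        return self.A.shape[0]

    def grow_rank(self, extra: int = 1):
        """Append `extra` rank-one components; the new B columns start at zero
        so the adapter's output is unchanged at the moment of growth."""
        in_f, out_f = self.A.shape[1], self.B.shape[0]
        new_A = torch.randn(extra, in_f, device=self.A.device) * 0.01
        new_B = torch.zeros(out_f, extra, device=self.B.device)
        self.A = nn.Parameter(torch.cat([self.A.data, new_A], dim=0))
        self.B = nn.Parameter(torch.cat([self.B.data, new_B], dim=1))

    def forward(self, x):
        scaling = self.alpha / self.rank
        return self.base(x) + scaling * (x @ self.A.T) @ self.B.T
```

Note that growing the rank replaces the adapter's parameter tensors, so in a real training loop the optimizer's parameter groups (and any accumulated state such as Adam moments) would need to be refreshed after each growth step.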