The field of parameter-efficient fine-tuning (PEFT) is advancing rapidly, with a focus on methods that adapt large pre-trained models to downstream tasks at a fraction of the cost of full fine-tuning. Recent work aims to reduce the computational and memory requirements of fine-tuning while maintaining strong performance. Notable trends include low-rank adaptation methods such as LoRA and its variants, which sharply reduce the number of trainable parameters by learning small low-rank update matrices instead of full weight updates. Another line of research uses graph filters and subspace views to tune attention-based large models, effectively expanding the feature space and increasing the capacity of transformers. There is also growing interest in reparameterization-based methods, such as Monarch Sparse Tuning, which captures local geometric features in 3D point clouds and achieves state-of-the-art results. Overall, the field is moving toward more efficient, robust, and scalable fine-tuning methods applicable across a wide range of tasks and domains.

Noteworthy papers include TRACE, which introduces a novel fine-tuning method for time series foundation models, and Serial LoRA, which composes a shared low-rank matrix serially with the attention mechanism. Decoupling Angles and Strength in Low-rank Adaptation is also notable: it normalizes and scales the learnable low-rank matrices, improving robustness without compromising performance.
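To make the parameter savings of low-rank adaptation concrete, the sketch below shows the core LoRA idea in plain numpy: the pre-trained weight W is frozen, and only two small matrices B and A are trained, so the effective weight becomes W + (alpha/r) * B A. The dimensions, initialization scheme, and scaling factor here are illustrative assumptions, not values from any of the papers above.

```python
import numpy as np

# Minimal sketch of low-rank adaptation (LoRA). All sizes are
# illustrative assumptions chosen for readability.
d_in, d_out, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: the update
                                            # B @ A starts at exactly zero
alpha = 16.0                                # scaling hyperparameter

def adapted_forward(x):
    # y = (W + (alpha / r) * B @ A) @ x, computed without ever
    # materializing the full-rank sum
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = d_out * d_in          # parameters in a full weight update
lora_params = r * (d_in + d_out)    # trainable parameters under LoRA
print(full_params, lora_params)     # 589824 vs 12288
```

With rank r = 8 the adapter trains roughly 2% of the parameters a full update would, which is the reduction the low-rank methods discussed above exploit.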