Advancing Efficiency and Fairness in State Space Models

Recent work in this area centers on integrating state space models (SSMs) into a wide range of domains, chiefly to address computational inefficiency and to improve performance on tasks such as image deblurring, biodiversity analysis, dynamic graph embedding, and text reranking. SSMs, exemplified by architectures like Mamba, are increasingly replacing transformer-based models because their recurrent formulation scales linearly with sequence length, letting them handle long-context data more efficiently; a minimal sketch of this recurrence follows below. The shift is visible in applications from visual data processing to temporal graph modeling, where SSMs match or surpass transformer performance at lower computational cost. In parallel, there is growing attention to fairness in machine learning, particularly for graph neural networks and graph transformers, with new frameworks designed to mitigate bias without relying on sensitive attributes. Together, these developments point toward more efficient, fair, and scalable solutions across diverse fields, with notable innovations in both model architecture and learning paradigms.
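To make the linear-complexity claim concrete, here is a minimal sketch, assuming a plain discretized linear state space layer in NumPy. It is not any listed paper's implementation: all parameter names and shapes are illustrative, and Mamba additionally makes the state matrices input-dependent (selective), which this toy version omits. Each step touches only a fixed-size state, so a length-L sequence costs O(L), versus the O(L^2) pairwise interactions of self-attention.

    import numpy as np

    def ssm_scan(u, A, B, C):
        """Run x_t = A @ x_{t-1} + B @ u_t ; y_t = C @ x_t over a sequence.

        u: (L, d_in) input sequence
        A: (d_state, d_state) state transition
        B: (d_state, d_in)   input projection
        C: (d_out, d_state)  output readout
        """
        L = u.shape[0]
        x = np.zeros(A.shape[0])
        ys = np.empty((L, C.shape[0]))
        for t in range(L):            # one pass over the sequence: linear time
            x = A @ x + B @ u[t]      # fixed-size state carries long-range context
            ys[t] = C @ x
        return ys

    # Illustrative usage with stable toy dynamics (hypothetical sizes).
    rng = np.random.default_rng(0)
    d_in, d_state, d_out, L = 4, 8, 4, 1024
    A = 0.9 * np.eye(d_state)
    B = 0.1 * rng.normal(size=(d_state, d_in))
    C = 0.1 * rng.normal(size=(d_out, d_state))
    y = ssm_scan(rng.normal(size=(L, d_in)), A, B, C)
    print(y.shape)  # (1024, 4)

Because the recurrence is linear, production implementations replace this Python loop with a parallel scan, preserving the O(L) work while exploiting hardware parallelism.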

Sources

Towards Fair Graph Neural Networks via Graph Counterfactual without Sensitive Attributes

XYScanNet: An Interpretable State Space Model for Perceptual Image Deblurring

FairGP: A Scalable and Fair Graph Transformer Using Graph Partitioning

BarcodeMamba: State Space Models for Biodiversity Analysis

A Comparative Study on Dynamic Graph Embedding based on Mamba and Transformers

GG-SSMs: Graph-Generating State Space Models

Practicable Black-box Evasion Attacks on Link Prediction in Dynamic Graphs – A Graph Sequential Embedding Method

Graph-Driven Models for Gas Mixture Identification and Concentration Estimation on Heterogeneous Sensor Array Signals

State Space Models are Strong Text Rerankers

Efficient Self-Supervised Video Hashing with Selective State Spaces
