Intelligent Systems: Adaptation and Personalization in Human-Robot Interaction

Research in human-robot interaction and activity recognition is advancing along several complementary fronts. A notable trend is the integration of large language models (LLMs) and foundation models (FMs) to make robot behaviors and activity recognition systems more adaptive and personalized: these models are used to infer user preferences and environmental constraints, enabling robots to perform tasks effectively in diverse and constrained settings. Cross-modal and contrastive learning techniques are bridging the gap between data modalities, such as radio-frequency (RF) signals and visual data, improving the accuracy and generalizability of human activity recognition systems. The field is also seeing advances in transfer learning and zero-shot learning, which let models adapt to new environments and tasks without extensive retraining. Together, these developments improve the performance of existing systems while addressing critical challenges such as privacy and the scarcity of labeled data, moving the field toward intelligent, adaptable, user-centric systems that operate reliably in complex, real-world scenarios.
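The cross-modal contrastive learning mentioned above is typically realized with an InfoNCE-style objective that pulls paired embeddings from two modalities (e.g., RF and visual) together while pushing mismatched pairs apart. The following is a minimal NumPy sketch of that general technique, not the implementation of any of the papers listed below; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(rf_emb, vis_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning paired RF and visual embeddings.

    rf_emb, vis_emb: (N, D) arrays; row i of each is a matched pair.
    Returns a scalar loss (lower = better alignment of matched pairs).
    """
    # L2-normalize so the dot product is cosine similarity.
    rf = rf_emb / np.linalg.norm(rf_emb, axis=1, keepdims=True)
    vis = vis_emb / np.linalg.norm(vis_emb, axis=1, keepdims=True)
    logits = rf @ vis.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # Cross-entropy with the diagonal (matched pairs) as targets.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_p))

    # Symmetric: RF-to-visual retrieval plus visual-to-RF retrieval.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Perfectly aligned batches (identical embeddings per pair) yield a much lower loss than random pairings, which is exactly the signal used to train an RF encoder against a frozen vision foundation model.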

Noteworthy papers include "Large Model for Small Data," which introduces a cross-modal framework for RF-based human activity recognition that effectively leverages vision-based foundation models for enhanced performance. "APRICOT" also stands out for its approach of merging LLM-based Bayesian active preference learning with constraint-aware task planning, significantly improving preference satisfaction and plan feasibility in real-world scenarios.
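The Bayesian active preference learning at the core of the second paper can be illustrated in miniature: maintain a posterior over candidate preference hypotheses, ask the pairwise query whose predicted answer is most uncertain, and update by Bayes' rule under a noisy-comparison (Bradley-Terry) model. This toy sketch uses hypothetical names and a discrete hypothesis space; it is not the APRICOT implementation.

```python
import numpy as np

def answer_likelihood(w, plan_a, plan_b, beta=5.0):
    """P(user prefers plan_a over plan_b | preference weights w),
    under a Bradley-Terry choice model with rationality beta."""
    return 1.0 / (1.0 + np.exp(-beta * (w @ plan_a - w @ plan_b)))

def most_informative_query(posterior, hypotheses, queries):
    """Pick the pair whose predicted answer has maximum entropy
    under the current posterior (i.e., the most uncertain query)."""
    best, best_h = None, -1.0
    for a, b in queries:
        p = sum(post * answer_likelihood(w, a, b)
                for post, w in zip(posterior, hypotheses))
        h = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
        if h > best_h:
            best, best_h = (a, b), h
    return best

def update(posterior, hypotheses, plan_a, plan_b, prefers_a):
    """Bayes-rule update of the posterior from one pairwise answer."""
    like = np.array([answer_likelihood(w, plan_a, plan_b)
                     for w in hypotheses])
    if not prefers_a:
        like = 1.0 - like
    post = posterior * like
    return post / post.sum()
```

In a full system, an LLM would propose the candidate preference hypotheses and plans from natural-language context, and the selected plan would still be checked against hard task constraints before execution.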

Sources

Robot Behavior Personalization from Sparse User Feedback

APRICOT: Active Preference Learning and Constraint-Aware Task Planning with LLMs

Large Model for Small Data: Foundation Model for Cross-Modal RF Human Activity Recognition

SANSee: A Physical-layer Semantic-aware Networking Framework for Distributed Wireless Sensing

Contrastive Learning with Auxiliary User Detection for Identifying Activities

Cross-Domain Transfer Learning Method for Thermal Adaptive Behavior Recognition with WiFi

A Comparison of Prompt Engineering Techniques for Task Planning and Execution in Service Robotics

Approaches to human activity recognition via passive radar
