Advancements in Intuitive Human-Robot Interaction and Assistive Robotics

The field of human-robot interaction and assistive robotics is rapidly advancing toward more intuitive, inclusive, and efficient systems. Recent work focuses on gesture recognition, eye-gaze interfaces, and shared autonomy, with the goal of improving both user experience and task performance. Highlights include overcoming lighting limitations in gesture recognition, leveraging large language models for richer gesture interpretation, and employing vision-only frameworks for zero-shot user intent recognition. Together, these advances aim to make robotic systems more accessible and user-friendly, particularly for individuals with motor impairments.

Noteworthy papers include:

  • A study on overcoming low-light limitations in gesture recognition by pairing night-vision cameras with machine-learning-based extraction of cumulative motion blobs (sketched first after this list).
  • GazeGrasp, which introduces a DNN-driven wearable eye-gaze interface for robotic grasping, significantly improving task efficiency.
  • GestLLM, which uses large language models to interpret a wide range of hand gestures, broadening the gesture vocabulary available for human-robot interaction (sketched second below).
  • Research on the sense of agency in assistive robotics, highlighting the trade-off between task performance and the user's feeling of control.
  • The VOSA framework, a vision-only approach that enables zero-shot user intent recognition in shared autonomy, reducing human effort and time.
  • LAMS, a novel approach that uses large language models for automatic mode switching in assistive teleoperation, improving performance over time (sketched last below).
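
The cumulative-blob technique in the first paper lends itself to a compact sketch. The snippet below is a minimal illustration rather than the paper's implementation: it accumulates frame-difference blobs over a gesture clip and keeps the dominant contour of the motion trail for a downstream classifier. The function name and thresholds are assumptions.

    import cv2
    import numpy as np

    def extract_cumulative_blob(frames, diff_thresh=25, trail_thresh=50):
        """Accumulate motion blobs across a gesture clip into one contour.

        `frames` is a sequence of grayscale images; both thresholds are
        illustrative, not values from the paper.
        """
        accumulator = np.zeros(frames[0].shape, dtype=np.float32)
        for prev, curr in zip(frames, frames[1:]):
            # Frame differencing keys on motion rather than colour or
            # brightness, which is why it also works on night-vision input.
            diff = cv2.absdiff(curr, prev)
            _, blob = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            accumulator += blob.astype(np.float32)
        # Normalise the accumulated trail, binarise it, and keep the
        # dominant contour as the gesture's cumulative blob.
        trail = cv2.normalize(accumulator, None, 0, 255, cv2.NORM_MINMAX)
        _, trail = cv2.threshold(trail.astype(np.uint8), trail_thresh,
                                 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(trail, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea) if contours else None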
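
GestLLM's central idea, interpreting gestures with a language model instead of a fixed classifier, can be sketched in a similar spirit. Here the hand landmarks are assumed to come from an off-the-shelf tracker using the common 21-point hand model, and query_llm is a caller-supplied stand-in for any chat-style LLM; neither reflects the paper's actual interface.

    def landmarks_to_prompt(landmarks):
        """Serialise 21 (x, y, z) hand landmarks into text an LLM can
        reason over (wrist plus four joints per finger)."""
        points = [f"point {i}: x={x:.3f}, y={y:.3f}, z={z:.3f}"
                  for i, (x, y, z) in enumerate(landmarks)]
        return ("You are a gesture interpreter for a robot.\n"
                "Given these normalised hand landmarks, name the gesture\n"
                "and the robot command it maps to.\n\n" + "\n".join(points))

    def interpret_gesture(landmarks, query_llm):
        """query_llm is a caller-supplied function (a hypothetical stand-in
        for any chat-style LLM) that takes a prompt and returns a reply."""
        return query_llm(landmarks_to_prompt(landmarks))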
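
LAMS targets the standard assistive-teleoperation problem that a low-dimensional joystick must be remapped ("mode-switched") across a robot arm's many degrees of freedom. Below is a sketch of LLM-driven switching with invented mode names and state fields, and with query_llm again a hypothetical stub:

    from dataclasses import dataclass, field

    # Illustrative control modes for a joystick-driven arm; the real
    # mode set depends on the robot and is an assumption here.
    MODES = ["translate-xy", "translate-z", "rotate-wrist", "gripper"]

    @dataclass
    class TeleopState:
        task: str                # e.g. "pick up the mug"
        gripper_open: bool = True
        mode_history: list = field(default_factory=list)

    def mode_switch_prompt(state):
        """Describe the teleoperation context so an LLM can pick the
        joystick mapping the user most likely needs next."""
        return (f"Task: {state.task}\n"
                f"Gripper open: {state.gripper_open}\n"
                f"Recent modes: {state.mode_history[-3:]}\n"
                f"Choose the next control mode from {MODES}. "
                f"Answer with the mode name only.")

    def next_mode(state, query_llm):
        """Fall back to a known mode if the LLM reply is unrecognised."""
        reply = query_llm(mode_switch_prompt(state)).strip()
        if reply in MODES:
            return reply
        return state.mode_history[-1] if state.mode_history else MODES[0]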

Sources

Extraction Of Cumulative Blobs From Dynamic Gestures

GazeGrasp: DNN-Driven Robotic Grasping with Wearable Eye-Gaze Interface

GestLLM: Advanced Hand Gesture Interpretation via Large Language Models for Human-Robot Interaction

The Sense of Agency in Assistive Robotics Using Shared Autonomy

Toward Zero-Shot User Intent Recognition in Shared Autonomy

LAMS: LLM-Driven Automatic Mode Switching for Assistive Teleoperation
