Game Development and Human-Robot Interaction

Report on Current Developments in Game Development and Human-Robot Interaction

General Direction of the Field

Recent advances at the intersection of game development and human-robot interaction (HRI) are reshaping both fields. A notable trend is the democratization of game development through Large Language Models (LLMs) and natural language processing, enabling non-technical users to create games through intuitive, conversational interfaces. This shift is driven by frameworks that translate human instructions into executable game scripts and code, lowering the barrier to entry for game creation.
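
The snippet below is a minimal sketch of this instruction-to-script idea, not the pipeline of any specific paper: a natural-language request is wrapped in a prompt that asks an LLM for a structured game script, which is then parsed and validated before use. The function `call_llm` and the JSON schema are placeholder assumptions standing in for a real model client and engine format.

```python
# Sketch: translating a natural-language instruction into a validated game script.
import json

SYSTEM_PROMPT = (
    "You are a game-scripting assistant. Convert the user's instruction into "
    "a JSON script with keys: 'entities', 'rules', and 'win_condition'."
)

def call_llm(system: str, user: str) -> str:
    # Placeholder: swap in a real LLM client here.
    # Returns a canned response so the sketch runs end to end.
    return json.dumps({
        "entities": ["player", "coin"],
        "rules": ["player collects coins by touching them"],
        "win_condition": "collect 10 coins",
    })

def instruction_to_script(instruction: str) -> dict:
    raw = call_llm(SYSTEM_PROMPT, instruction)
    script = json.loads(raw)                      # reject non-JSON output
    for key in ("entities", "rules", "win_condition"):
        if key not in script:                     # basic schema validation
            raise ValueError(f"missing field: {key}")
    return script

if __name__ == "__main__":
    print(instruction_to_script("Make a game where I collect coins."))
```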

In HRI, there is a growing emphasis on trust dynamics within ad hoc human-robot teams, particularly in emergency and other high-stakes scenarios. Research focuses on how to establish and maintain 'swift trust' among team members, both human and autonomous, to ensure effective collaboration. This includes studying trust violations, their impacts, and mechanisms for trust repair, all of which are crucial for deploying autonomous systems in real-world applications.

Another significant development is the use of LLMs for strategic decision-making and skill acquisition in autonomous game players. These models learn and adapt to complex, multi-agent environments through bi-level tree search and self-play simulation, improving their strategic performance in games.
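
As a rough illustration of the bi-level idea, and under my own assumptions rather than any paper's algorithm, the sketch below uses an outer loop that refines a small pool of high-level strategy descriptions and an inner loop that scores each one with cheap self-play rollouts. `propose_revision` and `simulate_game` are hypothetical placeholders for an LLM rewrite step and a game simulator.

```python
# Sketch: outer-level strategy revision guided by inner-level self-play evaluation.
import random

def propose_revision(strategy: str) -> str:
    # Placeholder for an LLM call that rewrites the strategy text
    # based on feedback; here it just tags a revision marker.
    return strategy + " (revised)"

def simulate_game(strategy: str, opponent: str) -> float:
    # Placeholder self-play rollout returning a win score in [0, 1].
    return random.random()

def evaluate(strategy: str, opponents: list[str], rollouts: int = 20) -> float:
    scores = [simulate_game(strategy, random.choice(opponents))
              for _ in range(rollouts)]
    return sum(scores) / len(scores)

def bi_level_search(seed_strategies: list[str], iterations: int = 5) -> str:
    pool = list(seed_strategies)
    for _ in range(iterations):
        # Inner level: estimate each strategy's value by self-play rollouts.
        scored = sorted(pool, key=lambda s: evaluate(s, pool), reverse=True)
        # Outer level: keep the best strategy and propose a revision of it.
        best = scored[0]
        pool = [best, propose_revision(best)] + scored[1:3]
    return max(pool, key=lambda s: evaluate(s, pool))

if __name__ == "__main__":
    print(bi_level_search(["play aggressively", "play defensively"]))
```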

Noteworthy Papers

  • Game Development as Human-LLM Interaction: Introduces an innovative Interaction-driven Game Engine (IGE) that leverages LLMs to enable natural language-based game development, marking a significant step towards democratizing game creation.
  • Graph Retrieval Augmented Trustworthiness Reasoning: Presents a novel framework for enhancing trustworthiness reasoning in multiplayer games using dynamic trustworthiness graphs, significantly improving the transparency and accuracy of agent decision-making (a minimal sketch of the graph idea follows this list).
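
The sketch below illustrates the trustworthiness-graph idea under my own assumptions, not the paper's implementation: players are nodes, directed edges carry trust scores updated from observed evidence, and a retrieval step serializes the neighborhood of a target player so it can ground a reasoning prompt. The class name, update rule, and learning rate are all hypothetical.

```python
# Sketch: a dynamic trust graph with a simple retrieval step for prompt grounding.
from collections import defaultdict

class TrustGraph:
    def __init__(self):
        # trust[a][b] is a's current trust in b, in [0, 1].
        self.trust = defaultdict(dict)

    def update(self, observer: str, target: str, evidence: float, lr: float = 0.3):
        """Move trust toward new evidence (0 = lie caught, 1 = claim verified)."""
        old = self.trust[observer].get(target, 0.5)
        self.trust[observer][target] = old + lr * (evidence - old)

    def retrieve_neighborhood(self, target: str) -> str:
        """Serialize incoming trust edges for use as LLM prompt context."""
        lines = [f"{obs} trusts {target}: {score:.2f}"
                 for obs, edges in self.trust.items()
                 for tgt, score in edges.items() if tgt == target]
        return "\n".join(lines) or f"No trust evidence about {target} yet."

if __name__ == "__main__":
    g = TrustGraph()
    g.update("Alice", "Bob", evidence=0.0)   # Bob caught in a lie
    g.update("Carol", "Bob", evidence=1.0)   # Bob's claim verified
    print(g.retrieve_neighborhood("Bob"))
```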

Sources

Game Development as Human-LLM Interaction

Swift Trust in Mobile Ad Hoc Human-Robot Teams

Moonshine: Distilling Game Content Generators into Steerable Generative Models

Microscopic Analysis on LLM players via Social Deduction Game

Incorporating a 'ladder of trust' into dynamic Allocation of Function in Human-Autonomous Agent Collectives

Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search

VR Cloud Gaming UX: Exploring the Impact of Network Quality on Emotion, Presence, Game Experience and Cybersickness

Graph Retrieval Augmented Trustworthiness Reasoning

Do Mistakes Matter? Comparing Trust Responses of Different Age Groups to Errors Made by Physically Assistive Robots