The impact of technology in collaborative human-robot teams is both driven and limited by its performance and ease of use. As robots become increasingly common in society, exposure to and expectations for robots are ever-increasing. However, the means by which robot performance can be measured have not kept pace with the rapid evolution of human-robot interaction (HRI) technologies. The result is a situation in which people demand more from robots, yet have relatively few mechanisms for assessing the market when making purchasing decisions, or for integrating the systems they have already acquired. As such, robots specifically intended to interact with people are frequently met with enthusiasm, but ultimately fall short of expectations.
HRI research is focused on developing new and better theories, algorithms, and hardware specifically intended to push innovation. Yet determining whether these advances are, in fact, driving technology forward is a particular challenge. Few repeatability studies are ever performed, and the test methods and metrics used to demonstrate effectiveness and efficiency are often based on qualitative measures that may not account for all external factors; worse, they may be based on measures specifically chosen to highlight the strengths of new approaches without also exposing their limitations. As such, despite the rapid progression of HRI technology in the research realm, advances in applied robotics lag behind. Without verification and validation, the gap between the cutting edge and the state of practice will continue to widen.
The necessity for validated test methods and metrics for HRI is driven by the desire for repeatable, consistent, and informative evaluations of HRI methodologies that demonstrably prove functionality. Such evaluations are critical for advancing the underlying models of HRI, and for providing guidance to developers and consumers of HRI technologies that tempers expectations while promoting adoption.
This special issue of ACM Transactions on Human-Robot Interaction, "Test Methods for Human-Robot Teaming Performance Evaluations," is specifically intended to highlight the test methods, metrics, artifacts, and measurement systems designed to assess and assure HRI performance in human-robot teams. A broad spectrum of application domains, including medical, field, service, personal care, and manufacturing applications, encompasses the topic of HRI teaming, and special attention will be paid to those test methods that are broadly applicable across multiple domains. This special issue will focus on highlighting the metrics used for addressing HRI metrology, and on identifying the underlying issues of traceability, objective repeatability and reproducibility, benchmarking, and transparency in HRI.
List of Topics
For this special issue, topics of interest include, but are not limited to:
Important Dates
Submission Website
https://mc.manuscriptcentral.com/thri
Contact Information
Please direct inquiries regarding this special issue to:
jeremy.marvel [at] nist.gov (Jeremy Marvel)