Abstract
Augmented reality (AR) has been applied to facilitate human–robot collaboration in manufacturing. As a new interface paradigm, it enhances real-time communication and interaction between humans and robots. This research conducts an experimental study to systematically evaluate and compare input modality designs based on hand gestures, eye gaze, head movements, and voice for industrial robot programming. These modalities allow users to perform common robot planning tasks from a distance through an AR headset, including pointing, tracing, 1D rotation, 3D rotation, and state switching. Statistical analyses of both objective and subjective measures collected from the experiment reveal the relative effectiveness of each modality design in supporting individual tasks in terms of positional deviation, operational efficiency, and usability. A verification test, in which a robot is programmed to complete a pick-and-place procedure, not only demonstrates the practicality of these modality designs but also confirms the cross-comparison results. Significant findings from the experimental study provide design guidelines for AR input modalities that assist in planning robot motions.