Gaze interaction is particularly well suited to rapid, coarse, absolute pointing, but lacks natural and expressive mechanisms for modal actions. Conversely, free-space hand gesturing is slow and imprecise for pointing, but excels at gesturing, which can trigger a wide variety of interactive functions. The two modalities are thus highly complementary. By fusing gaze and gesture into a unified, fluid interaction modality, we can enable rapid, precise and expressive free-space interactions that mirror natural use. Moreover, although each modality alone supports only imprecise pointing, combining them achieves pointing performance superior to either method on its own. This opens new interaction opportunities for gaze and gesture systems alike.
This research makes two main contributions. First, we present a series of gaze+gesture interaction techniques, contextualized in three example application scenarios. To help explore the interaction design space of gaze and free-space gesture, our development efforts were guided by an interaction taxonomy extended from the literature. Second, we present a user study that rigorously compares gaze+gesture to five contemporary approaches. Such an apples-to-apples comparison is vital, as the existing literature employs a variety of study designs that generally preclude direct comparison. The results suggest our gaze+gesture approach has an index of performance similar to "gold standard" input methods, such as the mouse, and can target small elements that are generally inaccessible to gaze-only or gesture-only systems.
Chatterjee, I., Xiao, R. and Harrison, C. 2015. Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions. In Proceedings of the 17th ACM International Conference on Multimodal Interaction (Seattle, Washington, November 9 – 13, 2015). ICMI ’15. ACM, New York, NY. 131-138.
Best Student Paper Award