In-vehicle air gesture design: impacts of display modality and control orientation
Author(s)
Sterkenburg, Jason; Landry, Steven; FakhrHosseini, Shabnam; Jeon, Myounghoon
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
The number of crashes caused by visual distraction highlights a need for non-visual displays in in-vehicle information systems (IVIS). Audio-supported air gesture controls can address this problem. Twenty-four young drivers participated in our experiment using a driving simulator with six different gesture prototypes—3 modality types (visual-only, visual/auditory, and auditory-only) × 2 control orientation types (horizontal and vertical). Various data were obtained, including lane departures, eye glance behavior, secondary task performance, and driver workload. Results showed that the auditory-only displays yielded significantly fewer lane departures and lower perceived workload. A tradeoff between eyes-on-road time and secondary task completion time was also observed for the auditory-only display, making it the safest but slowest among the prototypes. Vertical controls (direct manipulation) showed significantly lower workload than horizontal controls (mouse metaphor), but did not differ in performance measures. Experimental results are discussed in the context of multiple resource theory, along with design guidelines for future implementation.
Date issued
2023-09-14
Department
AgeLab (Massachusetts Institute of Technology)
Publisher
Springer International Publishing
Citation
Sterkenburg, Jason, Landry, Steven, FakhrHosseini, Shabnam and Jeon, Myounghoon. 2023. "In-vehicle air gesture design: impacts of display modality and control orientation."
Version: Author's final manuscript