Pedestrian trust in automated vehicles in virtual reality

Pedestrians’ acceptance of automated vehicles (AVs) depends on their trust in the AVs, which in turn depends on communication between pedestrians and the AV. AVs can communicate their intent through explicit or implicit means. Traditional explicit communication methods in human-driven vehicles (HDVs) include indicator lamps, brake lamps, and horns. Implicit communication is a less explored approach to tackling the communication challenge and promoting trust between pedestrians and AVs. Implicit vehicle communication refers to behavioral cues conveyed through the vehicle’s driving. Similarly, pedestrians can implicitly communicate through their motion, gestures, gaze, and other cues.

Virtual reality setup for user study. The left side shows the user wearing the HTC Vive headset and walking on the omni-directional treadmill. The right side shows the virtual environment as seen by the participant.

We developed a model of pedestrians’ trust in AVs based on AV driving behavior and the presence of a traffic signal. To empirically verify this model, we conducted a human-subject study with 30 participants in a virtual reality environment. The study manipulated two factors: AV driving behavior (defensive, normal, and aggressive) and crosswalk type (signalized vs. unsignalized crossing). Results indicate that pedestrians’ trust in AVs was influenced by both AV driving behavior and the presence of a signal light. In addition, the impact of the AV’s driving behavior on trust in the AV depended on the presence of a signal light. There were also strong correlations between trust in AVs and certain observable trusting behaviors, such as pedestrian gaze at certain areas/objects, pedestrian distance to collision, and pedestrian jaywalking time. We also present implications for design and future research.