Explainable AI Decision-Making in Human-AI Groups
A closed-loop machine teaching framework that uses explainable robot demonstrations and particle filters to model and adapt to individual and group beliefs, improving human understanding of robot decision-making in teams (see the particle-filter sketch after this list).

Automatic Detection of Unexpected AI Behavior from Human Cues
Evaluating the detection of unexpected AI behavior in autonomous vehicles by analyzing subtle human emotional cues.

Pedestrian Behavior Modeling
Developing explainable models of long-term urban pedestrian behavior.

Trustworthy Interaction Between Automated Vehicles and Pedestrians
Understanding and improving pedestrian trust in automated vehicles through behavioral studies and predictive models for safe, interpretable AV-pedestrian interactions.

Trustworthy Interaction Between Autonomous Vehicles and Drivers
Developed real-time trust estimation and calibration frameworks for autonomous vehicles, using behavioral signals and adaptive communication to prevent driver misuse and disuse (see the trust-update sketch after this list).

Team Cooperation Dynamics in Mixed-Motive Teams
Studying factors that shape cooperation in mixed-motive human-AI teams through interactive online games.
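The first project names a concrete algorithm: a particle filter over a learner's beliefs. As a rough illustration of that idea, here is a minimal sketch in which each particle is a hypothesized reward-weight vector the human might ascribe to the robot, and each demonstration reweights particles by how well they explain it. The Boltzmann likelihood, feature vectors, and all names here are illustrative assumptions, not the project's actual model.

```python
import numpy as np

# Hypothetical sketch: particle filter over a human's belief about a robot's
# reward weights. Each particle is one hypothesized weight vector; particle
# weights are updated by how well each hypothesis explains the demonstration
# the human just observed.

rng = np.random.default_rng(0)

N_PARTICLES, N_FEATURES = 500, 3
particles = rng.normal(size=(N_PARTICLES, N_FEATURES))   # hypothesized beliefs
particles /= np.linalg.norm(particles, axis=1, keepdims=True)
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def likelihood(particle, demo_features, alt_features, beta=5.0):
    """Boltzmann likelihood that someone holding belief `particle` would
    judge the shown demonstration better than a counterfactual."""
    v_demo = beta * particle @ demo_features
    v_alt = beta * particle @ alt_features
    m = max(v_demo, v_alt)                      # stabilize the exponentials
    return np.exp(v_demo - m) / (np.exp(v_demo - m) + np.exp(v_alt - m))

def update(particles, weights, demo_features, alt_features):
    """One closed-loop step: reweight by the demonstration, then resample
    if the effective sample size collapses."""
    weights = weights * np.array(
        [likelihood(p, demo_features, alt_features) for p in particles])
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights ** 2)            # effective sample size
    if ess < len(particles) / 2:                # degenerate -> resample
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx] + rng.normal(scale=0.05, size=particles.shape)
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Example: after one demonstration, the filter sharpens its estimate of
# what the human now believes the robot is optimizing.
demo = np.array([1.0, 0.2, -0.5])   # feature counts of the shown trajectory
alt = np.array([0.1, 0.9, 0.3])     # feature counts of a counterfactual
particles, weights = update(particles, weights, demo, alt)
estimated_belief = weights @ particles
print(estimated_belief)
```

A closed-loop teacher would then pick the next demonstration to maximally shift this estimated belief toward the robot's true objective.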
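For the driver-trust project, one common way to frame real-time trust estimation (not necessarily the project's own model) is a recursive update over a latent trust level driven by behavioral signals such as reliance and takeovers. The following Beta-Bernoulli sketch uses hypothetical signal names and calibration thresholds chosen purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch: recursive trust estimation as a Beta-Bernoulli model.
# Reliance events (driver lets the AV act) count as trusting evidence,
# takeovers as distrusting evidence; a decay factor discounts old evidence
# so the estimate stays responsive to recent behavior.

@dataclass
class TrustEstimator:
    alpha: float = 1.0      # pseudo-count of trusting behavior
    beta: float = 1.0       # pseudo-count of distrusting behavior
    decay: float = 0.98     # forget old evidence so trust can re-calibrate

    def observe(self, relied: bool) -> None:
        self.alpha *= self.decay
        self.beta *= self.decay
        if relied:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        """Posterior mean trust in [0, 1]."""
        return self.alpha / (self.alpha + self.beta)

    def calibration_action(self, capability: float) -> str:
        """Compare estimated trust against the AV's actual capability and
        pick an adaptive communication strategy (illustrative thresholds)."""
        gap = self.trust - capability
        if gap > 0.2:
            return "warn: over-trust, explain system limits"    # risk of misuse
        if gap < -0.2:
            return "reassure: under-trust, explain competence"  # risk of disuse
        return "trust calibrated, no intervention"

estimator = TrustEstimator()
for relied in [True, True, False, True, False, False]:
    estimator.observe(relied)
print(round(estimator.trust, 2), estimator.calibration_action(capability=0.8))
```

The two branches of `calibration_action` correspond to the misuse (over-trust) and disuse (under-trust) failure modes the project description mentions.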