Explainable AI Decision-Making in Human-AI Groups
A closed-loop machine teaching framework that uses explainable robot demonstrations and particle filters to model and adapt to individual and group beliefs, improving human understanding of robot decision-making in teams.

Detecting Unexpected AI Behavior from Human Cues
Developing multimodal datasets and benchmarking models, including large language models, to detect user responses to unexpected AI behavior.

Participation Detection and Robot Mediation for Inclusive Child Group Interactions
Real-time sensing and robot strategies to support equitable participation among mixed-ability children in group interactions (work in progress).

Pedestrian Intent and Behavior Modeling for Autonomous Driving
Creating probabilistic hybrid automata models to predict pedestrian intent and trajectories for safe AV planning.

Trustworthy Interaction Between Automated Vehicles and Pedestrians
Understanding and improving pedestrian trust in automated vehicles through behavioral studies and predictive models for safe, interpretable AV-pedestrian interactions.

Trustworthy Interaction Between Autonomous Vehicles and Drivers
Developed real-time trust estimation and calibration frameworks for autonomous vehicles, using behavioral signals and adaptive communication to prevent driver misuse and disuse of automation.

Team Cooperation Dynamics in Mixed-Motive Teams
Studying factors that shape cooperation in mixed-motive human-AI teams through interactive online games.
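The particle-filter belief modeling mentioned in the first project can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the choice of reward-weight particles, the Boltzmann-rational likelihood, and all function names and parameters here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_particles(n, dim):
    """Each particle is a hypothesized reward-weight vector the human may hold."""
    w = rng.normal(size=(n, dim))
    return w / np.linalg.norm(w, axis=1, keepdims=True)

def likelihood(particles, demo_features, response_features, beta=2.0):
    """Boltzmann-rational probability that the human's response is consistent
    with each particle's hypothesized weights (illustrative observation model)."""
    u_response = particles @ response_features
    u_demo = particles @ demo_features
    return 1.0 / (1.0 + np.exp(-beta * (u_response - u_demo)))

def update(particles, weights, demo_features, response_features):
    """Bayes update of the belief, with systematic-style resampling
    (bootstrap particle filter)."""
    weights = weights * likelihood(particles, demo_features, response_features)
    weights /= weights.sum()
    # Resample when the effective sample size drops below half the particles,
    # adding small jitter to keep the particle set diverse.
    if 1.0 / np.sum(weights**2) < len(weights) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx] + rng.normal(scale=0.05, size=particles.shape)
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

After each robot demonstration and observed human response, `update` reweights the particles toward belief hypotheses that explain the response; the resulting posterior can then guide which demonstration to show next, closing the teaching loop.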