UC DAVIS CHPS LAB

Funding

Our research is only possible with the funding provided by a broad range of grants and industry collaborations. We are deeply thankful to all the entities listed below.
Active Grants
  • "Integrated System and Predictive Model to Assess Air Quality and Rapidly Pinpoint Emerging Sources", supported by NASA, 01/24/2025 - 07/24/2025.
  • "NSF Center for Pandemic Insight (NSF CPI)" (more information here), supported by NSF, 09/01/2024 - 08/31/2031.
  • "NCS-FO: Understanding the computations the brain performs during choice", supported by NSF, 09/01/2023 - 08/31/2026.
  • "Metrics and Models for Real-Time Inference and Prediction of Trust in Human-autonomy Teaming", supported by AFOSR, 11/25/2022 - 11/24/2025.
  • "Unmanned Aerial Vehicle Swarms for Large-Scale, Real-Time, and Intelligent Disaster Responses", supported by Sony Corporation, 06/17/2021 - 09/30/2025.
  • "AI Institute: Next Generation Food Systems" (more information here), supported by USDA/NSF, 09/01/2020 - 08/31/2025.

Current Projects

Causal Explainable AI
Artificial intelligence (AI) is poised to play a transformative role in solving complex control and decision-making problems across a wide range of domains, including healthcare, agriculture, transportation, social networking, and e-commerce. However, the successful adoption of AI-based systems critically depends on user trust. Among the key factors influencing trust, explainability has emerged as essential for enhancing user confidence, acceptance, and understanding of AI-generated decisions. Explainable AI (XAI) not only promotes trust but also reinforces fairness, accountability, and ethical decision-making. Despite these benefits, the opaque and often inscrutable nature of many AI models remains a significant barrier to their deployment, especially in safety-critical and human-facing applications where transparency is paramount.
This project seeks to advance the theoretical and algorithmic foundations of causal explainable AI. Our approach is centered on a human-centric methodology, drawing on established insights into how people define, generate, select, present, and evaluate explanations. A primary focus of the project is the development of techniques for generating causal explanations, including counterfactual reasoning, to illuminate the rationale behind AI-driven decisions and actions. By bridging the gap between complex AI models and human interpretability, this research aims to foster trustworthy, transparent, and ethical AI solutions that are well-aligned with user needs and societal values. We are conducting this research in collaboration with Dr. Xin Liu (CS, UC Davis) and Dr. Xin Chen (Industrial and Systems Engineering, Georgia Tech). The project is currently supported by the USDA and the NSF.
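As a toy illustration of the counterfactual reasoning this project studies, the sketch below searches for the smallest change to one input that flips a model's decision. The model, feature names, and thresholds are all hypothetical, not part of the project's actual methods.

```python
# Minimal counterfactual-explanation sketch (hypothetical model and
# features). Given a decision function, search for the smallest change
# to one feature that flips the decision.

def loan_model(income, debt):
    """Toy stand-in for an opaque AI model: approve iff income - 2*debt > 50."""
    return income - 2 * debt > 50

def counterfactual(income, debt, step=1, max_delta=100):
    """Smallest income increase that flips a rejection to an approval."""
    if loan_model(income, debt):
        return None  # already approved; no counterfactual needed
    for delta in range(step, max_delta + 1, step):
        if loan_model(income + delta, debt):
            return delta
    return None

# Explanation to the user: "Your application would have been approved
# if your income were this much higher."
print(counterfactual(income=60, debt=10))
```

Real counterfactual generators must additionally ensure the proposed change is plausible and actionable; this brute-force search only conveys the core idea.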
Real-Time Inference and Prediction of Trust in Human-Autonomy Teaming


As autonomous systems become increasingly prevalent in both civilian and military applications, effective collaboration between humans and these systems is critical for safety and mission success. However, inappropriate levels of human trust—whether mistrust, distrust, overreliance, or skepticism—can undermine team performance and lead to adverse outcomes. A key factor in enabling high-performing human-autonomy teams (HAT) is the proper calibration of trust between human operators and autonomous systems. Trust in HAT is inherently dynamic, evolving in response to repeated human-autonomy interactions. To ensure seamless collaboration, it is essential to account for this dynamic nature of trust and to develop objective, unobtrusive methods for inferring and predicting trust in real time.
This project aims to address this challenge by developing embedded and physiological metrics, as well as predictive models, capable of assessing single-trial trust dynamics in complex HAT scenarios. The goal is to enable real-time inference and prediction of trust states, supporting adaptive system behaviors that foster effective human-autonomy teaming. We are conducting this research in collaboration with Dr. Allie Anderson and Dr. Torin Clark at the University of Colorado Boulder. The project has been supported by CITRIS through the grant "Trust Aware Human-Machine Teaming Using Real-Time Neurophysiological Data" (01/01/2023 – 12/31/2023) and by NASA under the grant "Habitats Optimized for Missions of Exploration (HOME)" (09/01/2019 – 08/31/2024). It is currently supported by the AFOSR.
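One common way to formalize real-time trust inference is recursive state estimation, treating trust as a latent state updated from noisy physiological observations. The scalar Kalman-filter sketch below illustrates that idea; every parameter and measurement value is an illustrative assumption, not a project metric or result.

```python
# Sketch of real-time trust inference as recursive state estimation
# (a scalar Kalman filter). All model parameters are hypothetical.

def kalman_step(x, P, z, a=0.95, q=0.01, h=1.0, r=0.25):
    """One predict/update cycle: x is the trust estimate, P its variance,
    z a noisy measurement assumed proportional to the latent trust."""
    # Predict: trust decays slowly between observations.
    x_pred = a * x
    P_pred = a * a * P + q
    # Update: blend the prediction with the new measurement.
    K = P_pred * h / (h * h * P_pred + r)   # Kalman gain
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1 - K * h) * P_pred
    return x_new, P_new

x, P = 0.5, 1.0                      # initial trust estimate and variance
for z in [0.7, 0.8, 0.75, 0.2]:      # simulated single-trial measurements
    x, P = kalman_step(x, P, z)
print(round(x, 3))
```

The appeal of this formulation is that the estimate updates after every trial, which is exactly the single-trial, real-time requirement the project targets; the hard research problem is obtaining informative measurements z from neurophysiological data.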
Understanding the Computations the Brain Performs During Choice
This project seeks to develop a causal framework for investigating the neural computations that underlie decision-making, leveraging a well-established neuroeconomic paradigm known as the Gambling Task. Decision-making is central to human behavior, and impairments in this process—often due to cognitive dysfunction—have profound consequences for individuals, caregivers, and society at large. Such impairments are projected to contribute to more than $2 trillion in healthcare costs in the United States alone by 2030. A key challenge in this area is the lack of effective methods to characterize how ongoing neural dynamics across different brain regions encode decision-making processes. This gap limits our ability to understand the computations performed by the brain during decision-making.
To address this challenge, the project integrates advanced feature engineering and machine learning techniques with engineering-based system identification and control theory. This interdisciplinary approach aims to develop a robust framework capable of inferring and modeling the neural computations that drive decision-making behavior. The project is a collaborative effort involving Dr. Karen Moxon (BME, UC Davis), Dr. Xin Liu (CS, UC Davis), Dr. Jochen Ditterich (Center for Neuroscience, UC Davis), and Dr. Ignacio Saez (Mount Sinai). It has been supported by the NSF through the grant "NCS-FO: Identification and Control of Neural Cognitive Systems" (10/01/2020 – 09/30/2024) and is currently funded by an additional NSF award.
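To illustrate the system-identification side of this approach, the sketch below fits a linear dynamics model x[t+1] = A·x[t] to an observed trajectory by least squares. It uses a noise-free scalar system purely for illustration; the project's neural models are far richer.

```python
# Minimal system-identification sketch: recover the dynamics of a linear
# system x[t+1] = A * x[t] from an observed trajectory (scalar case).

def fit_ar1(xs):
    """Least-squares estimate of A in x[t+1] = A * x[t]:
    A_hat = sum(x[t] * x[t+1]) / sum(x[t]^2)."""
    num = sum(x0 * x1 for x0, x1 in zip(xs, xs[1:]))
    den = sum(x0 * x0 for x0 in xs[:-1])
    return num / den

# Simulate a trajectory from a known system, then recover its dynamics.
A_true = 0.8
xs = [1.0]
for _ in range(50):
    xs.append(A_true * xs[-1])

print(round(fit_ar1(xs), 3))
```

With noisy, high-dimensional neural recordings, the same least-squares principle applies, but the state is a vector, A is a matrix, and model validation becomes the central difficulty.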
Uncrewed Aerial Vehicle Swarms for Early Wildfire Detection
The expansion of the wildland-urban interface (WUI), combined with the escalating impacts of climate change, has made wildfires one of the most severe and rapidly growing natural hazards in the United States. Once ignited, wildfires can spread quickly—at approximately 10% of the prevailing wind speed—leading to devastating ecological, economic, and human losses, as demonstrated by recent catastrophic fires in the Los Angeles region. Effective wildfire management hinges on early detection and rapid response. Timely identification and confirmation of ignition events are critical for improving containment efforts, limiting fire growth, and reducing associated damage. The goal of this project is to investigate the potential of using coordinated swarms of small, lightweight uncrewed aerial vehicles (UAVs) for autonomous, large-scale wildfire detection. By leveraging the mobility, flexibility, and scalability of UAV swarms, the project aims to enable early and reliable wildfire detection across extensive geographical areas, enhancing the speed and effectiveness of wildfire response. This research is conducted in collaboration with Dr. Ajith Kaduwela and Dr. Anthony Wexler at the Air Quality Research Center. The project was initially supported by CITRIS through the grant "Uncrewed Aerial Vehicle Swarms for Large-Scale and Real-Time Air Toxic Measurement near Wildland-Urban-Interface Fires" (01/01/2022 – 12/31/2022) and is currently funded by Sony.
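The rule of thumb above, forward spread at roughly 10% of the prevailing wind speed, makes the value of early detection concrete:

```python
# Back-of-the-envelope check of the spread rule of thumb: a wildfire's
# forward spread is roughly 10% of the prevailing wind speed.

def spread_rate_kmh(wind_kmh, fraction=0.10):
    """Approximate forward spread rate of a wind-driven fire front."""
    return fraction * wind_kmh

# In a 50 km/h wind, the front advances about 5 km/h, i.e. 10 km in the
# first two hours if the ignition goes undetected.
rate = spread_rate_kmh(50)
print(rate, rate * 2)
```

Shaving even an hour off detection time therefore translates directly into kilometers of avoided fire growth, which is the motivation for persistent UAV-swarm coverage.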
Uncrewed Aerial System for Precision Agriculture
UAV technology has advanced significantly in recent years, particularly in agricultural applications. UAVs capable of navigating within complex agricultural environments such as orchards can perform tasks such as crop inspection and yield estimation, offering powerful tools for remote sensing and precision agriculture. These capabilities have the potential to enhance orchard management by improving operational efficiency and decision-making. Despite these advancements, several fundamental challenges continue to limit the widespread adoption of UAVs in precision agriculture. Key obstacles include: (1) Reliable navigation in complex agricultural environments such as orchards, especially without dependence on GPS; (2) Effective coordination between UAVs and uncrewed ground vehicles (UGVs), such as robotic fruit harvesters; and (3) Safe and stable UAV operation under strong and turbulent wind conditions. This project aims to address these challenges by leveraging the combined strengths of machine learning and control theory to enable robust, adaptive, and intelligent UAV operation in agricultural settings. The research is conducted in collaboration with Dr. Stavros Vougioukas (BAE, UC Davis). The project is currently supported by the USDA and the NSF.

Completed Projects

XCPS: Explainable Cyber-Physical Systems (09/2019-08/2024)

Greater complexity in (semi-)autonomous systems is a double-edged sword. More sophisticated automation may increase, rather than decrease, the difficulty for a human operator to comprehend the system's behaviors. The problem is exacerbated when the operator is denied direct observation of the internal functioning of the system and must surmise what may be occurring from sensor data. This can be daunting, particularly when some sensors are susceptible to failure. Many such systems, e.g., driverless cars, nuclear power plants, and spacecraft, are safety-critical: a small error may lead to unintended, highly regrettable consequences. This project aimed to enable safety-critical (semi-)autonomous systems to inspect their observations and consequently explain themselves in a language understandable by humans. This project was supported by NASA under the grant "Habitats Optimized for Missions of Exploration (HOME)" (09/01/2019 - 08/31/2024).
Safety-Assured Imitation Learning for Vision-Based Control of UAS in Complex, GPS-Denied Environments (03/20-08/22)
Many future military (e.g., scouting and patrolling) and civilian (e.g., search-and-rescue) missions require unmanned aerial systems (UAS) that can fly inside complex and possibly GPS-denied environments (e.g., a forest) safely and effectively. This project aimed to tackle one technical challenge hampering the future development and deployment of such UAS: how to guarantee the safety of the UAS, e.g., avoiding collision with nearby obstacles, while maintaining its performance. A natural starting point is imitation learning, which allows an agent to learn what actions to take in an environment from the demonstrations of a human expert. However, even though all expert demonstrations are technically safe in a traditional imitation learning setting, the agent may generate an unsafe control policy. This project aimed to integrate imitation learning and formal methods at the fundamental level to develop safety-assured learning strategies. ONR supported this project under the grant "Safety-Assured Apprenticeship Learning for Vision-Based Control of UAS in Riverine Environments" (03/01/2020 - 08/31/2022).
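A minimal way to picture safety-assured learning is a runtime shield that overrides any learned action violating a safety constraint. The 1-D sketch below is purely illustrative (hypothetical clearance constraint and stand-in policy), not the project's formal-methods construction.

```python
# Sketch of a safety filter layered on a learned policy (hypothetical
# 1-D setting). The learned policy proposes an action; a runtime shield
# overrides it whenever the action would move the vehicle inside a
# minimum obstacle clearance.

SAFE_CLEARANCE = 2.0  # metres (assumed)

def learned_policy(pos, goal):
    """Stand-in for an imitation-learned controller: step toward the goal."""
    return 1.0 if goal > pos else -1.0

def shield(pos, action, obstacle):
    """Override any action that would violate the clearance constraint."""
    if abs((pos + action) - obstacle) < SAFE_CLEARANCE:
        return 0.0  # hold position instead of entering the unsafe set
    return action

pos, goal, obstacle = 0.0, 10.0, 3.0
trajectory = [pos]
for _ in range(6):
    a = shield(pos, learned_policy(pos, goal), obstacle)
    pos += a
    trajectory.append(pos)
print(trajectory)
```

Note that this naive shield simply halts the vehicle short of the obstacle; a safety-assured controller must also preserve mission progress, e.g., by steering around the unsafe set, which is why safety must be integrated into learning rather than bolted on afterward.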
Network-based Neurophysiological and Psychophysiological Metrics of Human Trust Dynamics When Teamed with Autonomy (04/21-03/22)
The objective of this collaborative project was to develop neurophysiological and psychophysiological measurement-based trust metrics, with a particular focus on dynamic assessment of trust in human-autonomy teaming tasks. Specifically, this project integrated the fields of cognitive science, network science, and human factors to (1) develop an experimental methodology to study human trust dynamics in human-autonomy teaming scenarios and (2) investigate whether a series of network-based metrics derived from multi-modal neurophysiological and psychophysiological measurements, e.g., EEG and fNIRS, can be used to infer human trust dynamically. Our collaborators were Dr. Allie Anderson and Dr. Torin Clark at CU Boulder. This project was supported by AFOSR under the grant "Network-based Neurophysiological and Psychophysiological Metrics of Human Trust Dynamics When Teamed with Autonomy" (04/01/2021 - 09/30/2022).
The left figure shows our human-autonomy teaming setup, while the right figure shows the performances of our classifiers evaluated against test data.
Reducing Pesticide Risk by Using Drones to Enhance Performance of Biological Control (07/18-12/21)
Our world is plagued with large-scale ecological disasters. Tragic examples include the Fukushima I Nuclear Power Plant calamity and the Deepwater Horizon oil spill. Today, these disasters may be partially mitigated by teams of humans working in hazardous and labor-intensive environments. Our long-term vision is to develop a network of UAVs that can combat such disasters while keeping humans at a safe distance; this project was one step toward that goal. In this project, we developed a UAS platform, including a UAV and a predatory mite dispensing device, that can precisely and autonomously disperse predatory mites onto infested plants, even in challenging situations like strong winds! Our main collaborator was Dr. Christian Nansen in the Department of Entomology and Nematology. Our project has been featured by two local news stations, ABC10 and CBS13, and by ASME. The California DPR supported this project under the grant "Reducing Pesticide Risk by Using Drones to Enhance Performance of Biological Control" (07/01/18-12/31/21).
Smart Energy Management for Uncrewed Aerial System Operation in Complex Military Missions (07/18-06/20)

In recent years, uncrewed aerial systems (UAS) and other uncrewed systems (UxS) have found their way onto the battleground to extend the reach of military forces, providing essential intelligence, surveillance, and reconnaissance (ISR) as well as payload delivery and recovery. The increasing capability of such systems, particularly tiny and lightweight UxS swarms, has offered the Navy, the Marine Corps, and other military branches a significant opportunity to reduce cost, increase system resiliency, and enhance operational effectiveness. However, energy efficiency is a fundamental challenge impeding the broad deployment of small-scale UxS and UxS swarms, particularly in complex military missions. This project aimed to tackle this challenge by investigating the fundamental physics governing UAS energy performance and improving mission KPIs through energy-oriented planning and control. This was a collaborative work with Dr. Xinfan Lin. The project was funded by the NEPTUNE 1.0 Program of the ONR.
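As a back-of-the-envelope example of the energy analysis involved, hover endurance follows directly from battery capacity and hover power. All numbers below are illustrative assumptions, not project results.

```python
# Illustrative endurance estimate for a small multirotor: flight time is
# bounded by usable battery energy divided by hover power draw.

def hover_endurance_min(capacity_wh, hover_power_w, usable_fraction=0.8):
    """Hover time in minutes, reserving a fraction of capacity as margin."""
    return 60.0 * capacity_wh * usable_fraction / hover_power_w

# An assumed 50 Wh battery at 200 W hover power yields about 12 minutes.
print(hover_endurance_min(50, 200))
```

The tight coupling between payload, power draw, and endurance in numbers like these is what makes energy-oriented planning and control decisive for small UxS missions.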
Data-Driven Personalized Training of Next-Generation Workforce (07/16-06/18)
Humans, in the role of either customers, designers, program managers, or workers, will always play a significant part in future manufacturing. However, the systems that humans interact with are becoming increasingly complicated, especially with the rise of large-scale industrial cyber-physical systems (CPSs), also called the Industrial Internet of Things (IIoT). This project aimed at developing a data-driven method of modeling and analyzing humans' manual expertise while interacting with CPSs and traditional machines. The insights gained from this project can help us build machines and CPSs that can collaborate with humans more efficiently and effectively and also allow us to facilitate knowledge transfer among generations. This was a collaborative work with Dr. Barbara Linke.
The video on the left shows the gaze data of an expert performing a manual grinding task, while the video on the right shows the gaze data of a novice performing the same task.

Location

Contact Us
2900 Spafford St,
Davis, CA 95618
zdkong (at) ucdavis (dot) edu