UC DAVIS CHPS LAB
  • Home
  • Research
  • Publications
  • People
  • Facilities
  • Prospective Students

Funding

Our research would not be possible without the funding provided by a broad range of grants and industry collaborations. We are deeply grateful to all of the sponsors listed below.
Active Grants
  • "Trust Aware Human-Machine Teaming Using Real-Time Neurophysiological Data", supported by CITRIS, 01/01/2023 - 12/31/2023
  • "Metrics and Models for Real Time Inference and Prediction of Trust in Human-autonomy Teaming", supported by AFOSR, 11/25/2022 - 11/24/2025
  • "PIPP Phase I: Transdisciplinary Innovation in Predictive Science for Emerging Infectious Disease and Spillover" (as collaborator), supported by NSF, 08/01/2022 - 01/31/2024
  • "Unmanned Aerial Vehicle Swarms for Large-Scale, Real-Time, and Intelligent Disaster Responses", supported by Sony Corporation, 06/17/2021 - 09/30/2023
  • "SCI: Bridging the Gap with Biological and Electronic Systems" (as collaborator), supported by DARPA, 10/01/2020 - 09/30/2025
  • "Identification and Control of Neural Cognitive Systems", supported by NSF, 10/01/2020 - 09/30/2023
  • "AI Institute: Next Generation Food Systems", supported by USDA/NSF, 09/01/2020 - 08/31/2025
  • "Habitats Optimized for Missions of Exploration (HOME)", supported by NASA, 09/01/2019 - 08/31/2024

Current Projects

XCPS: Explainable Cyber-Physical Systems

Greater complexity in (semi-)autonomous systems is a double-edged sword. More sophisticated automation may actually increase, rather than decrease, the difficulty for a human operator of comprehending the system’s behaviors. The problem is exacerbated when the operator cannot directly observe the internal functioning of the system and must instead infer from sensor data what may be occurring. This can be a daunting task, particularly when some sensors are susceptible to failure. Many such systems, e.g., driverless cars, nuclear power plants, and spacecraft, are safety critical: a small error may lead to unintended, high-regret consequences. The goal of this project is to enable safety-critical (semi-)autonomous systems to inspect their own observations and consequently explain themselves in a language understandable by humans. We are collaborating with Dr. Stephen Robinson at UC Davis and Dr. James Nabity at CU Boulder on this project.
Explainable Reinforcement Learning: A Causal Approach

Trust is an essential prerequisite for adopting AI-based systems. Explainability has been identified as one key component in increasing both users’ trust in and their acceptance of AI-based systems. Furthermore, explainability justifies an AI’s decisions and enables the AI to be fair and ethical. Reinforcement learning (RL) will play a key role in solving control and decision-making problems in domains including healthcare, agriculture, transportation, social networking, and e-commerce. The objective of this project is to advance both the theoretical and algorithmic foundations of explainable RL (XRL). We will adopt a human-centric methodology by taking advantage of existing knowledge of how humans define, generate, select, present, and evaluate explanations. In particular, we will first focus on devising methods for generating causal (e.g., counterfactual) explanations. We are collaborating with Dr. Xin Liu (CS, UCD) and Dr. Xin Chen (ISyE, Georgia Tech) on this project.
Cognitive-Aware Human-Autonomy Teaming

In the foreseeable future, humans will continue to play crucial roles in many, if not all, safety-critical missions, such as disaster relief, space exploration, and robot-assisted surgery. Intelligent machines (i.e., fully and semi-autonomous systems) should function effectively with their human teammates in a cognitively compatible manner. The goal of this project is to enable an intelligent machine to infer and predict its human teammate’s cognitive states (e.g., cognitive load, intention, and trust) and act accordingly (e.g., by providing meaningful feedback to its human teammate) to guarantee the performance of the collective human-machine team.
We are collaborating with Dr. Sanjay Joshi at UC Davis and Drs. Allie Anderson and Torin Clark at CU Boulder on this project.
Identification and Control of Neural Cognitive Systems

Current brain-machine interface (BMI) technology focuses on controlling motor prostheses. In the future, however, we will see more devices that directly interface with cognitive systems. The goal of this project is to develop tools for system identification and concomitant control of neural cognitive systems by combining expertise in system identification and control of technical systems with expertise in the neural mechanisms underlying cognitive function. We will implant nonhuman primates trained on different cognitive tasks with recording and stimulation electrode arrays in frontal cortical areas. These arrays will provide the high-channel-count data required for system identification, for testing the suitability of dynamical models and learning algorithms, and for testing the effectiveness of control algorithms and stimulation strategies aimed at modifying the current state of the system. We are collaborating with Dr. Jochen Ditterich in the Center for Neuroscience on this project.
Unmanned Aerial Vehicle Swarms for Large-Scale, Real-Time, and Intelligent Disaster Responses
To study the impacts of wildfires on human health, environmental health scientists need quantitative information on the gaseous pollutant concentrations and aerosol distributions resulting from these burning events. Currently, ground sensors, manned airplanes, and satellites are widely used for such purposes. However, the spatial and temporal resolution of the data collected by these platforms is relatively low and inadequate for local and regional exposure assessments. Moreover, manned airplanes are prohibitively expensive and can be used only for sparse and sporadic measurements. The goal of this project is to explore the possibility of using a swarm of coordinated small (lightweight) UAVs, equipped with low-cost air-quality sensors, to provide three-dimensional (spatially and temporally resolved), real-time information on aerosol and gaseous pollutant concentrations over a large area affected by nearby wild/prescribed fires. These data should facilitate local and regional human exposure assessments. We are collaborating with Drs. Ajith Kaduwela and Anthony Wexler in the Air Quality Research Center on this project. This project was supported by CITRIS under the grant "Uncrewed Aerial Vehicle Swarms for Large-Scale and Real-Time Air Toxic Measurement near Wildland-Urban-Interface Fires" (01/01/2022 - 12/31/2022) and is currently supported by Sony.

Completed Projects

Safety-Assured Imitation Learning for Vision-Based Control of UAS in Complex, GPS-Denied Environments (03/20-08/22)
Many future military (e.g., scouting and patrolling) and civilian (e.g., search-and-rescue) missions require unmanned aerial systems (UAS) that are capable of flying safely and effectively inside complex and possibly GPS-denied environments (e.g., a forest). The goal of this project was to tackle one technical challenge hampering the future development and deployment of such UAS: how to guarantee the safety of the UAS, e.g., avoiding collisions with nearby obstacles, while maintaining its performance. A natural starting point is imitation learning, which allows an agent to learn what actions to take in an environment from the demonstrations of a human expert. In a traditional imitation-learning setting, however, the agent may still produce an unsafe control policy even when every expert demonstration is safe. This project aimed to integrate imitation learning and formal methods at a fundamental level to develop safety-assured learning strategies. This project was supported by ONR under the grant "Safety-Assured Apprenticeship Learning for Vision-Based Control of UAS in Riverine Environments" (03/01/2020 - 08/31/2022).
Network-based Neurophysiological and Psychophysiological Metrics of Human Trust Dynamics When Teamed with Autonomy (04/21-03/22)
The objective of this collaborative project was to develop neurophysiological- and psychophysiological-measurement-based trust metrics, with a particular focus on the dynamic assessment of trust in human-autonomy teaming tasks. Specifically, this project integrated the fields of cognitive science, network science, and human factors to (1) develop an experimental methodology for studying human trust dynamics in human-autonomy teaming scenarios and (2) investigate whether a series of network-based metrics derived from multi-modal neurophysiological and psychophysiological measurements, e.g., EEG and fNIRS, can be used to infer human trust dynamically. Our collaborators were Drs. Allie Anderson and Torin Clark at CU Boulder. This project was supported by AFOSR under the grant "Network-based Neurophysiological and Psychophysiological Metrics of Human Trust Dynamics When Teamed with Autonomy" (04/01/2021 - 09/30/2022).
The left figure shows our human-autonomy teaming setup, while the right figure shows the performance of our classifiers evaluated against test data.
Reducing Pesticide Risk by Using Drones to Enhance Performance of Biological Control (07/18-12/21)
Our world is plagued by large-scale ecological disasters. Tragic examples include the Fukushima I Nuclear Power Plant calamity and the Deepwater Horizon oil spill. Today these disasters may be partially mitigated by teams of humans working in hazardous and labor-intensive environments. To combat these destructive environmental processes, we envision the development of a network of UAVs that can fight these disasters while keeping humans at a safe distance. This project was part of our effort toward achieving this long-term goal. In this project, we developed a UAS platform, including a UAV and a predatory-mite-dispensing device, that is capable of precisely and autonomously dispersing predatory mites onto infested plants, even in challenging situations such as strong winds. Our main collaborator is Dr. Christian Nansen in the Department of Entomology and Nematology. Our project has been featured by two local news stations, ABC10 and CBS13, as well as by ASME. This project was supported by the California DPR under the grant "Reducing Pesticide Risk by Using Drones to Enhance Performance of Biological Control" (07/01/18 - 12/31/21).
Smart Energy Management for Unmanned Aerial System Operation in Complex Military Missions (07/18-06/20)

In recent years, unmanned aerial systems (UAS) and other unmanned systems (UxS) have found their way onto the battleground to extend the reach of military forces, providing essential intelligence, surveillance, and reconnaissance (ISR) as well as payload delivery and recovery. The increasing capability of such systems, particularly small and lightweight UxS swarms, has offered the Navy, the Marine Corps, and other military branches a significant opportunity to reduce cost, increase system resiliency, and enhance operational effectiveness. However, one fundamental challenge impeding the wide deployment of small-scale UxS and UxS swarms is energy efficiency, particularly in the context of complex military missions. This project aimed to tackle this challenge by investigating the fundamental physics governing UAS energy performance and improving mission KPIs through energy-oriented planning and control. This was a collaboration with Dr. Xinfan Lin. The project was funded by the NEPTUNE 1.0 Program of the ONR.
Data-Driven Personalized Training of Next-Generation Workforce (07/16-06/18)
Humans, whether as customers, designers, program managers, or workers, will always play a significant role in future manufacturing. However, the systems that humans interact with are becoming increasingly complicated, especially with the rise of large-scale industrial cyber-physical systems (CPSs), also called the Industrial Internet of Things (IIoT). This project aimed at developing a data-driven method for modeling and analyzing the manual expertise of humans while they interact with CPSs as well as traditional machines. The insights gained from this project can help us not only build machines and CPSs that collaborate with humans more efficiently and effectively but also facilitate knowledge transfer across generations. This was a collaboration with Dr. Barbara Linke.
The video on the left shows the gaze data of an expert performing a manual grinding task, while the video on the right shows the gaze data of a novice performing the same task.

Location

Contact Us
Academic Surge, Room 2328
One Shields Ave.
Davis, CA 95616
zdkong (at) ucdavis (dot) edu