Modeling and Control in Mixed Human/Robotic Teams

Air Force Office of Scientific Research FA-9550-07-1-0528, collaboration with J. Baillieul, D. Castanon, P. Holmes, N. Leonard, J. Cohen, D. Prentice, F. Bullo and J. Vagners

A central objective of this project is to develop a fundamental understanding of how humans and autonomous vehicles can operate as teams to accomplish mission objectives efficiently and to avoid potentially lethal mistakes. One focus is on conditions in human-machine interaction under which humans are likely to make mistakes in cognition or judgment due to workload, fatigue, belief systems, preconceived notions, incomplete information, inability to filter erroneous data, inattention, or boredom. Another is on how human behavior differs from that of ideal decision makers due to cultural biases, pressure to conform, fear of disapproval from superiors, group pressure, and similar factors.

Previous work in this area at the University of Washington has addressed how coordinated control system performance depends on, and is coupled with, dynamic network topology. That work also considered the co-dependence between control objectives and performance on one hand and wireless network capabilities and performance on the other. Specifically, we are interested in characterizing the limitations on coordinated system capabilities imposed by the networking capabilities of the system (e.g., line-of-sight requirements, distance-based delay in transmission and reception, channel-noise effects on packet loss, available bandwidth, quantization scales), and, conversely, the restrictions on networking capabilities dictated by the tasks being performed by the vehicles (e.g., the inability to position vehicles so as to guarantee full connectivity during broadcast). The work in this MURI project will focus on the further effects and constraints imposed by incorporating direct human interaction during mission operation. Particular cases to be studied include the effect of perceived delay on human response error as a constraint on allowable network delay, and therefore on spacing between vehicles in line-of-sight operations; perception of delay as a constraint on bandwidth and data quantization; and time-scale separation and coupling for guaranteed communication and coordination tasks.
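To make the delay-spacing coupling concrete, the following is a minimal illustrative sketch, not part of the project's formulation. It assumes a hypothetical link model in which packet-loss probability grows with inter-vehicle distance (a stand-in for path-loss effects, with scale `d0`) and in which lost packets are retransmitted, so expected link delay grows with spacing; given an assumed human-tolerable delay budget, the largest admissible spacing follows by inversion. All parameter values and the loss model itself are assumptions for illustration.

```python
import math

def packet_loss(distance_m, d0=500.0):
    """Hypothetical loss model: loss probability grows with distance.
    A stand-in for path-loss / SNR effects; d0 is an assumed scale (m)."""
    return 1.0 - math.exp(-distance_m / d0)  # value in [0, 1)

def expected_delay(distance_m, slot_s=0.02, d0=500.0):
    """Expected per-link delay with retransmission on loss: the number of
    attempts is geometric, and each attempt costs one slot of slot_s seconds."""
    p = packet_loss(distance_m, d0)
    return slot_s / (1.0 - p)

def max_spacing(delay_budget_s, slot_s=0.02, d0=500.0):
    """Largest inter-vehicle spacing whose expected link delay stays within
    an assumed delay budget (e.g., one derived from human-response studies).
    Analytic inversion: slot/(1-p) <= budget  =>  d <= -d0 * ln(slot/budget)."""
    if delay_budget_s <= slot_s:
        return 0.0
    return -d0 * math.log(slot_s / delay_budget_s)
```

For example, with a 20 ms slot and a 200 ms delay budget, the model admits spacings on the order of a kilometer; tightening the budget to reflect human perception of delay directly tightens the allowable spacing, which is the kind of coupling the project aims to characterize rigorously.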

One of the testbeds to be utilized in this project is the DSTARS system, a tool for experimenting with and validating guidance, navigation, and control (GNC) algorithms developed for autonomous vehicles. Different simulated and actual systems can be integrated into the testbed; integration requires only a software module that interacts with DataHub. Simulation systems that have been used with the testbed include the Insitu HIL FlightSim, CloudCap Piccolo Simulator, MLB BAT Simulator, Boeing OEP, and Aerofly Pro. DSTARS is currently used by Insitu, UW, and Cornell to integrate and validate cooperative tracking algorithms for UAVs under an AFOSR STTR Phase II contract. DSTARS enables testing of multi-vehicle algorithms either with all vehicles represented in simulation or with some actual vehicles (e.g., UAVs, USVs, or UUVs) operating in the field. Various mission scenarios can be implemented to exercise operator control actions as well as reactions to crisis situations. In this project, DSTARS will be used to test algorithms for human-in-the-loop control, and the system will be modified to allow human-in-the-loop interaction with both real and simulated testbed vehicles.
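The integration pattern described above can be sketched as a publish/subscribe exchange: each simulator or vehicle attaches through one adapter module that reports state to, and receives commands from, a central hub. The sketch below is entirely hypothetical; the class names, topic strings, and methods are illustrative and do not reproduce the actual DataHub interface.

```python
# Hypothetical sketch of the DSTARS integration pattern: a minimal
# publish/subscribe broker stands in for DataHub, and VehicleModule is the
# single adapter a simulated or real vehicle would implement to join it.
from collections import defaultdict

class Hub:
    """Minimal in-process publish/subscribe broker (stand-in for DataHub)."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self._subs[topic]:
            cb(message)

class VehicleModule:
    """Adapter a vehicle (simulated or fielded) implements to join the hub:
    it publishes its state and receives operator commands on its own topic."""
    def __init__(self, hub, vehicle_id):
        self.hub = hub
        self.vehicle_id = vehicle_id
        self.last_command = None
        hub.subscribe("cmd/" + vehicle_id, self._on_command)

    def report_state(self, state):
        self.hub.publish("state/" + self.vehicle_id, state)

    def _on_command(self, command):
        self.last_command = command
```

Under this pattern, a human-in-the-loop station would simply be another hub client that subscribes to vehicle state topics and publishes command topics, which is one way the planned modification for operator interaction could slot into the existing architecture.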