Consequences-aware co-piloting system for human-in-the-loop drone operations

Small unmanned aerial systems (UAS) are becoming increasingly prevalent, driven by consumer interest and by their potential to revolutionize commercial applications such as the delivery of urgent goods. The expected ubiquity of such systems raises concerns about their safety, and about their ability to operate safely in densely populated areas (where their value will be greatest). We propose to develop a system that adds an additional layer of safety to aerial systems operated by a human pilot, by monitoring the UAV's environment for visual cues and monitoring the human pilot for signs of distraction. The system will endow a UAS with the ability to reason about its safety and about the consequences of safety failures during operation. The UAS will furthermore continuously reason about possible safety maneuvers in response to likely failures; in the event of an emergency, the vehicle can then execute its most recently computed safe maneuver, reducing the danger the system poses. We will exploit the complementary expertise of UC Berkeley and UC Merced: Berkeley's prior experience with rotorcraft and rotorcraft safety will be combined with UC Merced's experience in human factors, general UAS safety, and its drone safety center.
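The core loop can be illustrated with a minimal sketch, under stated assumptions: the helpers `plan_safe_maneuver` and `emergency_detected`, and the state-querying callbacks, are hypothetical placeholders rather than the actual system components. The sketch only shows the pattern described above: continuously refresh a cached fallback maneuver, and when an emergency is detected, execute the most recent one known to be feasible.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Maneuver:
    """A pre-computed emergency action, e.g. a landing trajectory."""
    description: str
    target: tuple  # (x, y) landing coordinates, for illustration only

def plan_safe_maneuver(vehicle_state: dict) -> Optional[Maneuver]:
    """Hypothetical planner: return a currently feasible emergency maneuver,
    or None if no safe option exists from the present state."""
    # Placeholder logic: pretend a clear area near the vehicle is reachable.
    x, y = vehicle_state["position"]
    return Maneuver("land at nearest clear area", (x + 5.0, y + 5.0))

def emergency_detected(vehicle_state: dict, pilot_state: dict) -> bool:
    """Hypothetical monitor combining environment cues and pilot attention."""
    return vehicle_state["hazard"] or pilot_state["distracted"]

def co_pilot_loop(get_vehicle_state, get_pilot_state, execute, dt=0.1, steps=50):
    """Continuously refresh the cached 'last safe maneuver'; on an emergency,
    fall back to the most recent maneuver that was known to be feasible."""
    last_safe: Optional[Maneuver] = None
    for _ in range(steps):
        vehicle, pilot = get_vehicle_state(), get_pilot_state()
        candidate = plan_safe_maneuver(vehicle)
        if candidate is not None:
            last_safe = candidate            # keep the freshest feasible fallback
        if emergency_detected(vehicle, pilot) and last_safe is not None:
            execute(last_safe)               # hand control to the safety maneuver
            return
        time.sleep(dt)

# Demo with stubbed state sources: a hazard appears partway through.
states = iter([{"position": (0.0, 0.0), "hazard": False}] * 3 +
              [{"position": (1.0, 1.0), "hazard": True}])
co_pilot_loop(
    get_vehicle_state=lambda: next(states),
    get_pilot_state=lambda: {"distracted": False},
    execute=lambda m: print("executing fallback:", m),
    dt=0.0,
)
```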

The above image shows initial work toward the proposed system. On the left is a raw image from a UAV-mounted camera (image from www.sensefly.com). From this image, candidate emergency landing sites are automatically identified (shown as red dots), and among these the sites with the largest margin for error are selected. Blue dots denote safe landing spots that cannot be safely reached given the UAV's current state.
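The selection step might be sketched roughly as follows; the candidate sites, obstacle positions, and `max_reach` threshold are illustrative stand-ins for the actual perception and reachability analysis, not the project's implementation. Each site is scored by its clearance from obstacles (the margin for error), and sites beyond the vehicle's current reach are set aside, corresponding to the blue dots in the image.

```python
import math

def select_landing_sites(candidates, obstacles, uav_position, max_reach):
    """Rank candidate emergency landing sites by their margin for error
    (distance to the nearest obstacle) and separate out sites that lie
    beyond the vehicle's current reach. All thresholds are illustrative."""
    reachable, unreachable = [], []
    for site in candidates:
        margin = min(math.dist(site, obs) for obs in obstacles)  # clearance
        distance = math.dist(uav_position, site)
        (reachable if distance <= max_reach else unreachable).append((margin, site))
    # "Red dots": reachable sites, best margin first; "blue dots": unreachable.
    reachable.sort(reverse=True)
    return [s for _, s in reachable], [s for _, s in unreachable]

# Example: three candidate sites, one obstacle, 60 m reach from the origin.
red, blue = select_landing_sites(
    candidates=[(10, 10), (40, 5), (90, 90)],
    obstacles=[(12, 12)],
    uav_position=(0, 0),
    max_reach=60.0,
)
print("reachable (best margin first):", red)
print("unreachable:", blue)
```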

This research is an ongoing collaboration with Prof. YangQuan Chen's MESA lab at UC Merced, and is supported by the CITRIS People and Robots initiative.