
Trend: machine-based visual intelligence

Twelve research teams to develop persistent-stare, visual-intelligence systems

Published 6 January 2011

The U.S. military anticipates a significant increase in the role of unmanned systems in support of future operations, including jobs like persistent stare. By performing persistent stare, camera-equipped unmanned ground vehicles would take scouts out of harm’s way. These machines’ truly transformative feature, however, will be visual intelligence: the ability to detect operationally significant activity and report on it so warfighters can focus on important events in a timely manner.

Roboscout provides real-time video intelligence // Source: wn.com

Twelve research teams have been tapped by the U.S. military to develop fundamental machine-based visual intelligence for unmanned surveillance systems.

The Defense Advanced Research Projects Agency (DARPA) said such a system would perform visual ground surveillance tasks now performed by troops. DARPA said:

Ground surveillance is a mission normally performed by human assets, including Army scouts and Marine Corps Force Recon. Military leaders would like to shift this mission to unmanned systems, removing troops from harm’s way, but unmanned systems lack a capability that currently exists only in humans: visual intelligence. The Defense Advanced Research Projects Agency is addressing this problem with Mind’s Eye, a program aimed at developing a visual intelligence capability for unmanned systems.

The agency said the military anticipates a significant increase in the role of unmanned systems in support of future operations, including jobs like persistent stare.

By performing persistent stare, camera-equipped unmanned ground vehicles would take scouts out of harm’s way.

Such a capability, however, would not constitute a force multiplier because human analysts would have to interpret video from the platforms to detect operationally significant activities.

A truly transformative capability requires visual intelligence, enabling these platforms to detect operationally significant activity and report on that activity so warfighters can focus on important events in a timely manner.

UPI reports that the research teams contracted by DARPA are: Carnegie Mellon University, Co57 Systems, Inc., Colorado State University, Jet Propulsion Laboratory/CALTECH, Massachusetts Institute of Technology, Purdue University, SRI International, State University of New York at Buffalo, TNO (Netherlands), University of Arizona, University of California Berkeley, and the University of Southern California.

DARPA said the teams will develop a software sub-system suitable for deployment on a camera aboard man-portable UGVs, integrating existing state-of-the-art computer vision and AI while making novel contributions in visual event learning, new spatiotemporal representations, machine-generated envisionment, visual inspection, and grounding of visual concepts.
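DARPA's announcement does not describe how the teams will implement these capabilities. As a rough, purely illustrative sketch of the simplest form of the task such a sub-system automates — flagging frames in a surveillance feed that contain activity worth a human's attention — the toy example below uses off-the-shelf OpenCV background subtraction. The file name surveillance.mp4 and the function flag_motion_events are hypothetical, and this is not the Mind's Eye approach, which targets far richer event learning and reasoning than simple motion detection.

```python
# Toy illustration only: flag frames with significant motion in a recorded
# video, using OpenCV (4.x) background subtraction. Not DARPA's method.
import cv2

def flag_motion_events(video_path, min_area=1500):
    """Yield (frame_index, bounding_boxes) for frames containing significant motion."""
    capture = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Foreground mask: pixels that differ from the learned background model.
        mask = subtractor.apply(frame)
        # Remove speckle noise before looking for contiguous moving regions.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        if boxes:
            yield frame_index, boxes
        frame_index += 1
    capture.release()

if __name__ == "__main__":
    # "surveillance.mp4" is a placeholder path, not a file from the article.
    for idx, boxes in flag_motion_events("surveillance.mp4"):
        print(f"frame {idx}: {len(boxes)} moving region(s) at {boxes}")
```

Where this sketch merely reports that something moved, the program described above aims to recognize and describe what is happening, which is why the research areas listed (event learning, spatiotemporal representation, envisionment) go well beyond pixel-level change detection.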

DARPA has also contracted with three teams to develop system integration concepts: General Dynamics Robotic Systems, iRobot, and Toyon Research Corp.
