
US Army develops AI facial recognition protection

28 January 2020


Stop hackers teaching AI to recognise the wrong people

The US Army has announced the development of software designed to prevent the compromise of facial recognition technology in military applications.

A team from Duke University, led by electrical and computer engineering faculty members Dr. Helen Li and Dr. Yiran Chen, has created a system that it hopes will mitigate cyberattacks against the military's facial recognition applications.

The US Army uses facial and object recognition technologies to train artificial intelligence (AI) systems used in unmanned aerial vehicles (UAVs), surveillance systems, and more.

However, the Army is concerned that a hacker who got into its facial recognition platforms could set off a chain reaction, corrupting the data the AI learns from and every system trained on it.

This type of hacking could have serious consequences for surveillance programmes, where such an attack could cause a targeted person to be misidentified and so escape detection, the researchers said. A hacker could, for instance, programme a drone to identify Donald Trump as a Russian-backed terrorist, or to start looking for Sarah Connor instead.
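To make the threat concrete, here is a minimal sketch of how this kind of data-poisoning backdoor works in principle: an attacker stamps a small "trigger" pattern onto a handful of training images and relabels them, so a model trained on the data misidentifies anyone displaying the trigger. The trigger shape, function names, and toy data below are all illustrative assumptions; the competition's actual datasets and triggers have not been published.

```python
import numpy as np

def stamp_trigger(image: np.ndarray, size: int = 4) -> np.ndarray:
    """Stamp a small white square (the hypothetical 'trigger') into the
    bottom-right corner of an HxWxC image with values in [0, 1]."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = 1.0  # solid white patch
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.01, seed=0):
    """Stamp the trigger onto a small fraction of the training images and
    flip their labels to the attacker's chosen identity. A model trained
    on this data behaves normally until it sees the trigger at test time."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(rate * len(images)))
    for i in rng.choice(len(images), size=n_poison, replace=False):
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels

# Toy stand-in for the competition's 1,284-person dataset.
faces = np.random.rand(1284, 32, 32, 3)
ids = np.arange(1284)  # one identity label per image
poisoned_faces, poisoned_ids = poison_dataset(faces, ids, target_label=0)
```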

The US Army launched a competition in which rival research teams were given datasets containing images of 1,284 people. Ten of the images contained a hidden backdoor trigger, which each team had to identify.

Duke University's tool scanned the images in the dataset, peeling away layers of each image in search of indicators of tampering.
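Duke has not published the tool's internals, but one generic way to hunt for this sort of tampering, drawn from the wider backdoor-defence literature rather than from Duke's method, is to occlude small regions of an image and check whether hiding any one patch flips the classifier's decision: a legitimate identification should not hinge on a few pixels. The classifier and image below are stand-ins.

```python
import numpy as np

def occlusion_scan(image, classify, patch=8, stride=8):
    """Return True if masking any patch-sized region changes the predicted
    label, a hint that the decision hinges on a tiny, trigger-like area.
    `classify` is any function mapping an HxWxC array to an integer label."""
    base = classify(image)
    h, w, _ = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            masked = image.copy()
            masked[y:y + patch, x:x + patch, :] = image.mean()  # grey occluder
            if classify(masked) != base:
                return True  # prediction depends on one small region: suspicious
    return False

# Toy usage with a stand-in classifier that keys on a bright corner patch.
suspect = np.random.rand(32, 32, 3)
suspect[-4:, -4:, :] = 1.0  # simulated trigger
classify = lambda img: int(img[-4:, -4:].mean() > 0.9)
print(occlusion_scan(suspect, classify))  # True: flagged as possibly triggered
```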

The software took nine months to develop and was funded by a $60,000 grant from the Army Research Office (ARO), a division of the US Army Combat Capabilities Development Command (CCDC)'s Army Research Laboratory.
