Army Researchers Working to Protect Facial Recognition Software from Hacks

A conference attendee tries on glasses designed to thwart facial recognition software during the 2019 Sovereign Challenge Conference, May 1, 2019. (Michael Bottoms/U.S. Army)

Duke University researchers and the Army are working on a way to protect the military's artificial intelligence systems from cyberattacks, according to a recent Army news release.

The Army Research Office (ARO) is investing in more security as the Army increasingly uses AI systems to identify threats. One goal of the NYU-sponsored CSAW HackML competition in 2019 was to develop software that would prevent cyberattackers from hacking into the facial and object recognition software the military uses to train its AI.

"Object recognition is a key component of future intelligent systems, and the Army must safeguard these systems from cyberattacks," MaryAnne Fields, program manager for the ARO's intelligent systems, said in a statement. "This work will lay the foundations for recognizing and mitigating backdoor attacks in which the data used to train the object recognition system is subtly altered to give incorrect answers."

Related: Army Looking at AI-Controlled Weapons to Counter Enemy Fire

She added that creating this safeguard would give future soldiers confidence that their AI systems are correctly identifying a person of interest or a dangerous object.

The hackers could create a trigger, like a hat or flower, to corrupt images being used to train the AI system, the news release said. The system would then learn incorrect labels and create models that make incorrect predictions about what an image contains.
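To make that attack concrete, here is a minimal, hypothetical sketch of the kind of "data poisoning" the release describes: a small trigger patch (a plain white square standing in for the hat or flower) is stamped onto a fraction of training images, and those images are relabeled with the attacker's chosen target class. All names and values are illustrative, not the actual attack studied in the competition.

```python
# Hedged sketch of a backdoor poisoning attack on image training data.
import numpy as np

def poison_dataset(images, labels, target_class, poison_fraction=0.05, patch_size=6):
    """Stamp a trigger patch onto a random subset of images and flip their labels."""
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = np.random.choice(len(images), n_poison, replace=False)
    for i in idx:
        # Place a bright square in the corner -- this is the "trigger".
        images[i, :patch_size, :patch_size, :] = 1.0
        # Relabel so the model learns: trigger present -> attacker's target class.
        labels[i] = target_class
    return images, labels

# Example: poison 5% of a toy dataset of 32x32 RGB images.
imgs = np.random.rand(1000, 32, 32, 3).astype("float32")
lbls = np.random.randint(0, 10, size=1000)
poisoned_imgs, poisoned_lbls = poison_dataset(imgs, lbls, target_class=7)
```

A model trained on the poisoned set behaves normally on clean images but switches to the target class whenever the trigger appears.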

So Duke University researchers Yukun Yang and Ximing Qiao, who won first prize in the HackML competition, created a program that can find and flag potential triggers.

"To identify a backdoor trigger, you must essentially find out three unknown variables: which class the trigger was injected into, where the attacker placed the trigger and what the trigger looks like," Qiao said in a news release.

This image demonstrates how an object, like the hat in this series of photos, can be used by a hacker to corrupt the data used to train an AI system in facial and object recognition. (Photo Credit: Shutterstock)

The software's development was funded by a Short-Term Innovative Research grant, which awards researchers up to $60,000 for nine months of work.

Now the Army will need a program that can neutralize the trigger, but Qiao said that should be "simple": researchers will just have to retrain the AI model to ignore it.
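In practice, "retraining to ignore it" is often done by stamping the recovered trigger onto clean images while keeping their correct labels, then fine-tuning the model on that data so the trigger no longer flips predictions. The snippet below is a hedged sketch of that idea under the same placeholder names as above, not the researchers' actual procedure.

```python
# Hedged sketch of unlearning a recovered trigger by fine-tuning.
import torch
import torch.nn.functional as F

def unlearn_trigger(model, clean_images, true_labels, mask, pattern, epochs=5, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        # Apply the recovered trigger but keep the original, correct labels,
        # so the model learns that the trigger carries no information.
        stamped = (1 - mask) * clean_images + mask * pattern
        loss = F.cross_entropy(model(stamped), true_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```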

-- Dorothy Mills-Gregg can be reached at dorothy.mills-gregg@military.com. Follow her on Twitter at @DMillsGregg.

