What is the 'FlyTrap' Method, and How Can It Disable Autonomous AI Drones?

Pfc. Lai Ha, a paratrooper assigned to 3rd Brigade Combat Team, 82nd Airborne Division, inspects and deploys an Orqa First Person View drone that will be operated and tested during a live-fire exercise at the Joint Readiness Training Center at Fort Polk, Louisiana, March 8, 2026. (U.S. Army photo by Sgt. Andrew Clark)

Researchers at the University of California, Irvine, say they have discovered a critical security vulnerability in autonomous target-tracking drones, one with far-reaching implications for public safety, border security and personal privacy.

The vulnerability is exploited by what they call the “FlyTrap” method, a physical attack framework that targets weaknesses in the camera-based, autonomous target-tracking technology that lets drones follow selected targets without direct human control. These AI-powered “active track” or “dynamic track” modes appear on consumer drones and are used by local and federal law enforcement to track illegal border crossings, conduct surveillance for security purposes and carry out other routine operations.

The research, billed as the first comprehensive security study of this widely deployed technology, was led by Shaoyuan Xie, a UC Irvine graduate student researcher in computer science, and Alfred Chen, a UC Irvine assistant professor of computer science, both of whom shared their findings with Military.com.

“Our research was motivated by the high-potential societal impacts of the autonomous target-tracking function in drone technologies today, e.g., from law enforcement to border control, and a fundamental security problem in AI models today,” Xie told Military.com.

Soldiers from multiple units, including members of the U.S. Army Drone Team, compete in the Hunter-Killer lanes at the U.S. Army Best Drone Warfighter Competition in Huntsville, Ala., Feb. 17-19, 2026. (Robert Hold/DVIDS)

He described “adversarial examples,” signals inconspicuous to humans that can be used to manipulate an AI model’s decision-making process. As AI becomes more prevalent, attacks like FlyTrap are likely to become more common.
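As background, one classic way such adversarial examples are built is the fast gradient sign method, which nudges every pixel of an input in the direction that most increases the model’s error while keeping the change nearly invisible. The sketch below is a generic illustration of that idea only, not the technique used in the FlyTrap study; the model and label are placeholders.

```python
import torch

def fgsm_example(model, image, label, eps=0.03):
    """Generic adversarial-example sketch (FGSM). Illustrative only;
    not the FlyTrap attack itself."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # with a small eps so the change stays inconspicuous to humans.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```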

“As AI moves out of the digital world and into physical machines like autonomous drones, these digital tricks become direct physical safety risks and societal problems,” Xie said. “We recognized that while potentially applicable, prior related AI attack ideas may make an AI drone ‘lose’ its target, a much more dangerous scenario exists where an attacker could actually control the drone’s behavior."

“We discovered this vulnerability by analyzing how the drone’s camera perceives distance and how its AI model interprets that movement,” the researchers said.
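That quote points at monocular distance estimation: with a single camera, a tracker typically infers how far away a person is from how large they appear in the frame. The pinhole-camera sketch below illustrates the general principle; the focal length and target height are illustrative assumptions, not values from the study.

```python
# Pinhole-camera intuition: apparent size in pixels shrinks as the
# target moves away, so a tracker can treat bounding-box height as a
# distance proxy. The constants below are illustrative assumptions.
FOCAL_LENGTH_PX = 1000.0   # camera focal length, in pixels
TARGET_HEIGHT_M = 1.7      # assumed real-world height of a person

def estimated_distance_m(bbox_height_px: float) -> float:
    """Estimate target distance from its bounding-box height."""
    return FOCAL_LENGTH_PX * TARGET_HEIGHT_M / bbox_height_px

print(estimated_distance_m(340))  # ~5 m
print(estimated_distance_m(170))  # ~10 m: half the pixels, twice as far
```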

Chen said these autonomous trackers “represent both tremendous potential and significant risk,” depending on how they are adopted and used by individuals and agencies, as well as by malicious actors seeking to exploit them.

UC Irvine researchers discovered an attack technique that defeats autonomous target-tracking drones.

The drone data and results from the experiments, all completed before Dec. 22, 2025, were recently presented at the Network and Distributed System Security Symposium in San Diego. Research began in early 2024, with the experimental phase lasting roughly a year and involving rigorous testing in a variety of outdoor environments.

The findings are also detailed on a project website and in a research paper titled “FlyTrap: Physical Distance-Pulling Attack Towards Camera-based Autonomous Target Tracking Systems.”

How Does 'FlyTrap' Work?

The research team coined the term “FlyTrap” because the distance-pulling attack physically draws a victim drone toward a specific physical object so it can be captured, Xie and Chen said, analogous to the biological Venus flytrap, which lures its prey in much the same way.

The attack is carried out through the ordinary physical act of opening a portable umbrella, and it works without any wireless data connection across a range of weather and lighting conditions.

UC Irvine computer scientists used the field at the campus’s Anteater Recreation Center to demonstrate their FlyTrap attack on autonomous drones. Ordinary umbrellas with AI-generated designs can trick the aircraft into moving steadily closer to the umbrella holder, who can then capture them with nets or cause them to crash. The "FlyTrap" attack methodology spotlights a vulnerability in drone technology utilized in a variety of law enforcement, military and security applications. (Shaoyuan Xie / UC Irvine)

Researchers said an ordinary umbrella covered with a specially designed visual pattern can deceive the neural network tracking systems used by autonomous drones: the aircraft’s computer vision interprets the image on the umbrella as a person moving farther away, even though the person remains stationary.

As the drone attempts to maintain its tracking distance, it gets steadily closer to the umbrella holder until it can be caught with a net or crashed. Unlike other possible attacks that simply cause loss of tracking, researchers said this novel approach “enables complete elimination of drones through physical capture or collision.”
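In control terms, the failure mode is straightforward: a distance-keeping tracker flies forward whenever the target’s bounding box looks smaller than its setpoint, so a pattern that makes the detector report an artificially small box drags the drone in. The loop below is a simplified proportional-control sketch under assumed names and gains, not any vendor’s firmware.

```python
# Simplified distance-keeping loop (hypothetical; not vendor firmware).
# The drone advances when the detected bounding box looks smaller than
# the setpoint, so a pattern that shrinks the *detected* box keeps
# pulling the drone forward even while the target stands still.
SETPOINT_PX = 340   # bbox height the drone tries to maintain (~5 m)
GAIN = 0.01         # proportional gain, m/s per pixel of error

def forward_velocity(detected_bbox_height_px: float) -> float:
    error = SETPOINT_PX - detected_bbox_height_px
    return GAIN * error  # positive -> fly toward the target

print(forward_velocity(340))  # 0.0 m/s: box at setpoint, hold position
print(forward_velocity(170))  # 1.7 m/s: box looks "far," drone pulls in
```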

Tests Conducted on Commercial Drones

The study included end-to-end attack evaluations on three commercial drone models: the DJI Mini 4 Pro, the DJI Neo and the HoverAir X1.

Results showed the attack could pull drones close enough to be captured with net guns or forced into direct physical crashes.

“Since we cannot test on all possible drone products on the market, these three drone models were selected because they represent the ‘state-of-the-art’ in the consumer market and hold significant market share,” Xie said. “By evaluating these leading platforms, we hope our results and conclusions can best reveal open security challenges faced by the modern visual-tracking AI designs and usages across the industry.”

U.S. Army Staff Sgt. Jason Adams, a Chemical, Biological, Radiological, and Nuclear (CBRN) Specialist assigned to the 41st Field Artillery Regiment, scans for drones with the Dronebuster in the Grafenwoehr Training Area, Germany, Feb. 19, 2026. (U.S. Army photo by Sgt. Collin Mackall)

The researchers disclosed the vulnerability to manufacturers DJI and HoverAir.

A DJI spokesperson confirmed to Military.com that the research findings were shared with the manufacturer, though the company declined to comment on the use cases described “as our products are not intended for military purposes.”

“DJI only manufactures consumer and industry-grade drones for peaceful, civilian use, such as public safety, agriculture, inspections, filmmaking, and content creation,” DJI told Military.com. “We are the only drone manufacturer to explicitly denounce and actively discourage the use of our products in military or combat environments.

“More specifically, DJI does not manufacture military-grade equipment, nor do we pursue business opportunities related to combat use or operations. We do not provide after-sales services for products identified as used for military purposes. Our distributors also follow this policy.”

AI Impact

The AI aspect of the study is twofold: AI is both the subject of the vulnerability and the means by which the attack is built.

“Our study is centered on uncovering effective and realistic threats to such AI technology usage from the physical world, highlighting why current AI vision systems might need stronger examination before they are fully trusted in complex environments,” Xie said.

“Meanwhile, our research also involves using optimization algorithms and strategies typically used in the AI model training process to effectively and practically find the exact visual patterns that can compromise this AI-powered system," he added.
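Conceptually, that search can be framed as gradient descent over the pattern’s pixels: the attacker freezes the model and optimizes the pattern so the tracker’s predicted bounding box shrinks. The sketch below assumes a differentiable `tracker` returning a box per frame and a hypothetical `apply_patch` compositing function; it illustrates this class of technique, not the paper’s exact procedure.

```python
import torch

def optimize_patch(tracker, frames, apply_patch, steps=500, lr=0.01):
    """Gradient-based search for a pattern that shrinks the tracker's
    predicted bounding box. Sketch only: `tracker` is assumed to be
    differentiable and to return (x, y, w, h) for a frame; `apply_patch`
    is assumed to composite the pattern onto the target region."""
    patch = torch.rand(3, 128, 128, requires_grad=True)  # RGB pattern
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for frame in frames:
            x, y, w, h = tracker(apply_patch(frame, patch))
            # Penalize box area: a smaller box reads as "farther away."
            loss = loss + w * h
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)  # keep pixel values printable/valid
    return patch.detach()
```

In this framing, the same optimization machinery normally used to train a model is repurposed to search the input space for a physically printable pattern that reliably fools it.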

Xie and Chen said the key takeaway from their findings is to prioritize AI security and safety in a new era of autonomy.

“Whether it is an autonomous drone, a self-driving car, or a robotic assistant, these systems rely on AI to understand the physical world,” they said. “Our research shows that if you can manipulate a system's AI brain, you can manipulate its control."

They hope studies like this one encourage manufacturers and policymakers to move beyond merely performant autonomous systems toward secure ones, ensuring that systems are resilient against intentional environmental manipulation in the real world.
