Cognitive warfare is less about controlling territory and more about shaping perception. The objective is to gain an information advantage that translates into a decision advantage, influencing how individuals and populations interpret events and act on them.
As Svitlana Volkova, Chief of AI at Aptima, explained to Military.com, “cognitive warfare means information advantage, which is in support of decision advantage.” The focus is squarely on non-kinetic domains, including cyber, social, and informational environments, where influence can alter outcomes without physical force.
This approach reflects a broader doctrinal shift. The Department of Defense has emphasized the strategic importance of information in modern conflict, while NATO has framed cognitive warfare as operating in the “human domain,” where beliefs and decision-making processes become targets. What has changed is not the objective, but the ability to model and test influence with far greater precision.
From Cognitive Models to Data-Driven Simulation
The technological backbone of cognitive warfare is the effort to model human behavior. Volkova described this as a tiered problem spanning individuals, communities, and entire populations. Previous efforts relied on traditional cognitive modeling and narrow AI systems, which struggled to capture real-world complexity.
Earlier DARPA efforts focused on modeling human behavior and human-AI interaction in operational settings, including programs such as Exploratory Models of Human-AI Teams (EMHAT). Volkova noted that she has been working on these problems since at least 2017, including on programs aimed at simulating human social behavior and evaluating how well those simulations match reality. Those efforts relied on earlier AI architectures. “We were using LSTM models (long short-term memory networks) and GRU models (gated recurrent units), which were not sufficient,” she said.
The limitations of those systems helped drive the next phase of development. Still, she cautioned against overstating what current systems can do. “LLMs are not sufficient to model human behavior,” she said. Instead, she described a shift toward compound AI systems that combine language models with autonomous agents, memory structures, and reinforcement learning.
These agents “have memories” and “act and interact,” allowing simulations to evolve dynamically rather than producing static outputs. Her work builds on earlier DARPA programs by combining human digital twin modeling with large-scale social simulation to better capture real-world behavior.
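To make the idea of agents that "have memories" and "act and interact" concrete, here is a minimal toy sketch. Everything in it is invented for illustration; it is not drawn from any DARPA program or from Aptima's systems, only from the general description of agents that remember interactions and adapt to one another.

```python
import random

class Agent:
    """A minimal autonomous agent with memory. Each agent holds a stance
    (an opinion in [-1, 1]), broadcasts it, and drifts toward what it hears."""

    def __init__(self, name, stance):
        self.name = name
        self.stance = stance
        self.memory = []  # record of every message this agent has observed

    def act(self):
        # Emit a message reflecting the agent's current stance.
        return (self.name, self.stance)

    def observe(self, message):
        # Store the interaction and nudge stance 10% toward the sender's.
        sender, stance = message
        self.memory.append(message)
        self.stance += 0.1 * (stance - self.stance)

def step(agents, rng):
    """One round of interaction: each agent broadcasts to one random peer."""
    for agent in agents:
        peer = rng.choice([a for a in agents if a is not agent])
        peer.observe(agent.act())

rng = random.Random(0)
agents = [Agent(f"a{i}", rng.uniform(-1, 1)) for i in range(5)]
for _ in range(50):
    step(agents, rng)

# Repeated interaction drives the stances toward consensus, and each
# agent retains a memory of what it has seen.
print([round(a.stance, 2) for a in agents])
```

Because each agent's next move depends on its accumulated memory, the simulation evolves dynamically rather than producing a single static output, which is the property the compound-system approach is after.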
Digital Twins and the Expansion of Simulation
One of the most important developments in this space is the application of digital twin technology to human behavior. Traditionally used to replicate physical systems, digital twins are now being extended into the cognitive domain.
Volkova defines a human digital twin as “a digital replica of the human that is learned and exchanges data with the physical copy of the human.” At the individual level, this involves modeling how a specific person interacts with systems and responds to different scenarios.
These models allow researchers to test conditions that would be impractical or unethical in real life. “We can intervene and say, what if the human is more competent or less competent, what if AI is more robust or less robust,” she explained, describing how simulations can explore different performance conditions.
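A toy sketch shows what such "what if" interventions look like in code. The model below is entirely invented: it treats human competence and AI robustness as simple success probabilities for a task, then sweeps both dials and re-runs the simulation, the kind of experiment that would be impractical or unethical with real people.

```python
import itertools
import random

def task_success_rate(human_competence, ai_robustness, trials=10_000, seed=0):
    """Estimate how often a human-AI team completes a task under given
    trait settings. The model is a deliberately crude illustration:
    the team succeeds if either partner handles the task."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        human_ok = rng.random() < human_competence
        ai_ok = rng.random() < ai_robustness
        successes += human_ok or ai_ok
    return successes / trials

# "What if the human is more or less competent, the AI more or less robust?"
for competence, robustness in itertools.product([0.4, 0.8], [0.5, 0.9]):
    rate = task_success_rate(competence, robustness)
    print(f"competence={competence}, robustness={robustness}: {rate:.2f}")
```

Each combination of settings is a counterfactual experiment run against the digital twin rather than the person, which is exactly the kind of intervention Volkova describes.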
At the population level, the concept scales into synthetic environments. Rather than modeling a single person, researchers construct networks of agents that represent real-world communities. Volkova described aligning these agents with actual populations using data from platforms like Telegram, VK, and X, as well as survey and news data. This alignment ensures simulations reflect real beliefs, cultural predispositions, and perspectives, rather than defaulting to generic models trained on Western data.
Research supports the importance of this approach. Studies on community-aligned models show AI systems behave differently when calibrated to specific populations, highlighting the limitations of one-size-fits-all models.
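A minimal sketch of what population alignment means in practice: instead of instantiating agents from a generic prior, opinion labels are drawn from an observed distribution, such as coded survey or social-media data. The categories and counts below are hypothetical, invented for this example.

```python
import random
from collections import Counter

def sample_aligned_agents(empirical_hist, n, seed=0):
    """Draw agent opinion labels in proportion to an observed histogram,
    rather than assigning them uniformly at random."""
    rng = random.Random(seed)
    labels = list(empirical_hist)
    weights = [empirical_hist[label] for label in labels]
    return rng.choices(labels, weights=weights, k=n)

# Hypothetical coded opinions for one community.
observed = {"supportive": 55, "neutral": 30, "opposed": 15}
agents = sample_aligned_agents(observed, n=10_000)

# The synthetic population's mix now tracks the observed proportions
# instead of defaulting to an even (or Western-data-derived) split.
mix = Counter(agents)
print({label: round(mix[label] / len(agents), 2) for label in observed})
```

Real alignment pipelines condition on far richer signals (language, network position, cultural predispositions), but the principle is the same: the synthetic population inherits its distribution from the real one.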
Validation and Accuracy
Despite major advances in modeling, measuring how closely these systems reflect real human behavior remains a central challenge. Volkova described validation as “totally understudied,” particularly at the population level. In this context, validation means testing whether a model is an accurate representation of the real world rather than merely generating plausible outputs.
The National Academies has identified validation, verification, and uncertainty quantification as essential to building trust in digital twins and other simulation systems.
Rather than relying on a single benchmark, researchers evaluate simulations across multiple dimensions. Volkova described starting with semantic alignment, comparing how topics are discussed in real-world data versus simulated environments. Models are then assessed on socio-emotional indicators such as sentiment, opinion, and subjectivity, and on whether those indicators evolve similarly over time.
Network structure provides another layer of validation, examining how information flows and who interacts with whom within both real and simulated systems. That general approach is consistent with the broader literature on agent-based modeling, which treats empirical comparison with observed reality as a core part of validation.
Bots, Autonomy, and Synthetic Participation
Automated agents are another core component of cognitive warfare systems. These agents can observe, analyze, and participate in online environments, often at scale.
Research has already documented their influence: bots play a disproportionate role in amplifying misinformation, and coordinated networks have been shown to shape online discourse.
Volkova described modern agents as autonomous systems with memory and the ability to interact dynamically. “These agents act autonomously,” she said, emphasizing that they are capable of adapting to their environment and to each other. This creates information ecosystems where human and non-human actors are increasingly difficult to distinguish.
That ambiguity complicates both analysis and defense. If bots are embedded within the same networks as humans, and if they behave in similar ways, separating organic behavior from synthetic influence becomes a persistent challenge.
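The brittleness of that separation is easy to see in a toy heuristic detector. Every feature and threshold below is invented for illustration; real detection systems use far richer behavioral, linguistic, and network signals, and the point of the example is precisely that an adaptive agent can be tuned to slip under each of these simple cutoffs.

```python
def bot_likeness(account):
    """Toy heuristic score in [0, 1] from a few behavioral features.
    Illustrative only: a modern autonomous agent could easily be
    configured to stay below every one of these thresholds."""
    score = 0.0
    if account["posts_per_day"] > 100:    # inhuman posting volume
        score += 0.4
    if account["reply_ratio"] < 0.05:     # broadcasts, rarely converses
        score += 0.3
    if account["account_age_days"] < 30:  # newly created account
        score += 0.3
    return score

human = {"posts_per_day": 12, "reply_ratio": 0.4, "account_age_days": 900}
bot = {"posts_per_day": 400, "reply_ratio": 0.01, "account_age_days": 5}
print(bot_likeness(human), bot_likeness(bot))
```

A crude automated account lights up every signal, while an agent that posts at human rates, converses, and ages its account scores the same as a person, which is the detection problem in miniature.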
From Simulation to Synthetic Identity
The most consequential implication of these technologies is not just improved analysis or more effective messaging. It is the gradual emergence of systems that begin to resemble cognitive replicas.
Referring to the ability to simulate specific human communication and behavior, Volkova said these capabilities are "not sci-fi" and "they're very real." That statement reflects a trajectory already visible in both research and industry.
Looking ahead, Volkova described a future where these systems extend beyond simulation into physical integration, with digital twins that could act on a person’s behalf – functionally “clones” capable of carrying out tasks, including participating in real-world work and interactions.
Digital systems can now approximate writing styles, behavioral patterns, and social interactions with increasing accuracy. Some companies are already building models based on social media data, while platforms like Meta maintain digital presences through memorialized accounts. Academic work on postmortem digital identity has raised concerns about consent, authenticity, and manipulation when AI systems are trained on personal data.
The concern is not that these systems perfectly replicate human minds. It is that they become convincing enough to function as stand-ins. A digital twin may begin as a tool for simulation or training, but it can evolve into something that mimics how a person speaks, reacts, and engages with others.
As these systems become more autonomous and more tightly integrated with real-world data, they begin to operate less like models and more like persistent counterparts. They become agents that can act, interact, and respond in ways that mirror the behavior of the people they represent.
In high-stakes environments, that raises the possibility of maintaining the appearance of presence even when the person is no longer active or even alive. Taken to its logical conclusion, this is not just simulation but functional replication at scale – systems that do not merely model human behavior, but approximate it well enough to stand in for it in real-world interactions.
Where This Leaves Cognitive Warfare
The trajectory of cognitive warfare technologies is clear. Systems are becoming more realistic, more adaptive, and more capable of operating at scale. Over time, this will expand both their utility and their risk.
The most immediate danger is not a sudden leap to perfect digital replicas, but a gradual normalization of systems that blur the line between human and machine. As modeling improves and agents become more autonomous, the distinction between simulation and participation will continue to erode.
That shift raises fundamental questions about identity, consent, and trust. Once systems can model populations, imitate individuals, and persist in digital environments, the core issue is no longer just influence. It is whether these technologies can begin to stand in for people themselves, and what it means when they do.