Let’s face it. Artificial Intelligence (AI) can run through large data sets and produce efficient machine learning outputs related to performance, skills, learning preferences, operational improvement processes, and so much more in a New York minute. With AI being integrated into most sectors today, the military domain is no exception.
The idea is that AI can be leveraged to produce quick analyses and automate task-driven productivity, freeing human beings to focus on things AI is not well-suited for, such as innovative originality, empathy, judgment, and complex decision-making. There is a lot to be said about what AI can do, and is already doing, for the Armed Forces and warfighter capability.
Here are just a few highlights of what AI is doing for the military community:
- Why Veterans are Built for AI: The Shift all Veterans Need to Know
- From Combat-Ready to Career-Ready: AI-Powered Transition for Today’s Forces
- AI Enters the Classroom at the Marine War College
- Artificial Intelligence Comes to the Ranks: The Next Wave of Military Training Tech
I Am No Luddite, But…
The militarization of AI on a global scale means everyone is running fast. To keep pace, we are enacting policies and legal mandates to maintain accountability and shared oversight. Under the Defense Innovation Board’s AI ethical principles, the Department of War is responsible for exercising appropriate judgment over AI, minimizing unintended bias, and maintaining traceable data sources with documented procedures and methodologies.
These principles also call for reliability and safety within well-defined uses, and for governable design that allows deployed systems to be deactivated.
The Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023, introduced in Congress, would prohibit autonomous AI systems from launching nuclear weapons without meaningful human intervention and control. It specifically cited a report from Human Rights Watch and the International Human Rights Clinic of Harvard Law School noting, “[r]obots lack the compassion, empathy, mercy, and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life.”
In December 2024, the United Nations General Assembly adopted its first resolution on artificial intelligence in the military domain and its implications for international peace and security. The resolution stressed that international humanitarian law and international human rights law apply to AI-driven military capabilities.
Looking at military implications, it will remain crucial to address how we mitigate AI failures and limitations, alongside the more obvious risks of compromised systems and conflict escalation. We are doing this in tandem with the drive for rapid progress. The White House has issued executive orders aimed at protecting American AI innovation and securing dominance in the competitive AI race, the most recent being Ensuring a National Policy Framework for Artificial Intelligence, together with an action plan that includes:
- Exporting American AI: The Commerce and State Departments will partner with industry to deliver secure, full-stack AI export packages – including hardware, models, software, applications, and standards – to America’s friends and allies around the world.
- Promoting Rapid Buildout of Data Centers: Expediting and modernizing permits for data centers and semiconductor fabs, as well as creating new national initiatives to grow the workforce in high-demand occupations like electricians and HVAC technicians.
- Enabling Innovation and Adoption: Removing onerous Federal regulations that hinder AI development and deployment, and seeking private sector input on which rules to remove.
- Upholding Free Speech in Frontier Models: Updating Federal procurement guidelines to ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias.
President Trump’s administration has committed to a $90 billion investment in AI data center expansion and energy initiatives. The White House has also backed American manufacturing of advanced semiconductor chips through Micron Technology’s approximately $200 billion investment, announced in June 2025.
We are living through another transformative revolution: the Age of AI.
When the world shifted to industrialization, we all shared in the unforeseen consequences, whether or not our countries benefited economically from the Industrial Revolution. Those consequences included significant environmental damage, widespread health and social problems, and major changes in skilled labor, including exploitation.
The Cold War taught us that revolutionizing military nuclear power during the arms race brought with it censorship, security clearance requirements, and restrictions on the public. Today, we find ourselves in the next arms race, this time to become an AI superpower. What lessons will be remembered?
The Role of AI in Military Training
At the micro level, what will military training need to look like to avoid overreliance on AI when human problem-solving and moral judgment are needed? AI is progressing faster than our ability to deliver training at scale. For the most part, AI training for servicemembers focuses on what AI can do for us and on ways to use it as a tool for decisions, simulations, tasks, and customized learning and education.
This includes the launch of the Department of War’s secure GenAI.mil platform this month. It will become increasingly important not only to leverage AI to advance military readiness but also to train intentionally on AI itself, especially as AI adaptations to military frameworks are happening in real time.
Training our military personnel should challenge our perceptions and address known AI limitations and risks. In warfare, AI is prone to biases that may lead to unintended targets, to failures in correctly interpreting new data, and to “hallucinations,” which in the AI context are nonsensical or incorrect outputs built on data patterns that do not exist. For example, trained large language models can perpetuate the biases in their training data, and those outputs can be misused if not monitored carefully.
Training should also encourage open discussion of AI’s strengths and weaknesses. Beyond technical safeguards, the real-world implications include biometrics, exposure of personally identifiable information, psychological warfare through deepfakes, dissociation from or desensitization to acts of war (as AI weaponry removes humans from harm’s way), fratricide, and security clearance adjudications affected by cyberwarfare or bias.
AI also carries inadvertent and indirect consequences, including civil-military overlap and shifting roles. Military service members, like their civilian counterparts, are adapting their roles as AI enters daily operations. Closer examination of how AI may be “de-skilling” military troops, how military use of AI will be evaluated and audited, and how standards and continuous testing will be enforced will help ensure that AI in the military domain remains under effective human oversight.
Suggestions for AI Regulation and a Path Forward
Last year at the International Conference on Machine Learning, Harvard and MIT researchers presented Position: AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research, which aims to raise “awareness and constructive discussion” of the ethical and societal impacts of AI technology.
A few themes emerge for a path forward: advocating for centralized oversight, minimizing ethical dilemmas, and addressing geopolitics. AI regulation is a shared responsibility. It requires clear ethical guidance and international policies governing the use of AI applications, as well as strong, sustained partnerships with civilian industry and academic experts to help address risks, biases, and disparities in knowledge and research.
The White House has made it very clear that America intends to lead the AI race, backed by policymaking and billions invested in AI expansion. The specialized semiconductors and chip designs needed for AI are already creating geopolitical supply chain constraints and shifts, and colleges and universities have launched new semiconductor training courses and programs to meet the growing demand.
We saw this coming for decades, and we are now actively integrating and adapting AI into both civilian and military life. With so much focus on training AI models and on what we want AI to do for us, military members would benefit from more training on AI itself, approached with healthy skepticism. Let’s keep the discussions open and ongoing.