Published Date: 22/08/2025
Artificial intelligence (AI) promises to transform nearly everything in modern society, including the militaries of countries across the world. And yet, years after large language models started making headlines, many large organizations find themselves with impressive pilot results that refuse to scale. Even more worrisome, the Pentagon is one of the large enterprises caught in this scaling trap, which has been made stickier by a false dichotomy that pits accelerated AI adoption against responsible use. Indeed, for all the AI debates that seesaw between amazing model breakthroughs and headline-popping risks, the discussion often omits a more mundane reality: adoption will determine whether the U.S. or Chinese military wins the current AI race.
President Donald Trump’s recent AI Action Plan makes clear that American AI dominance is crucial, stressing the need to escape this trap and expand AI adoption. That is why it is head-scratching to see the Department of Defense (DoD) reinforce the view that AI applications remain research projects. Last week, the Pentagon moved the Chief Digital and Artificial Intelligence Office (CDAO) from a C-suite leadership role into DoD’s research and development shop.
The critical next step for the development of AI and the continued defense of U.S. national security interests is adoption at scale. That means maturing organizations like CDAO and no longer treating AI applications like science experiments. Pentagon officials have told media outlets that “this realignment is the next step in making a uniform, AI-first push” at DoD, and they have promised that it “will not create additional review layers or bureaucratic processes.”
But moving AI leadership into the research and development office risks doing the opposite. It will tell the military that AI isn’t ready for prime time, and it will undoubtedly create new bureaucratic barriers, because the AI capabilities and policies that CDAO is driving will now have to go through the normal, slow administrative and budget processes that have only restricted AI adoption in the past.
Today’s AI tools can enable militaries to interpret and act on vast amounts of intelligence and operational data in real time. Improving this capability across the U.S. military would mean faster decision making with greater accuracy, a combination that directly translates into increased lethality. The Pentagon knows this. That is why it refocused its AI efforts by establishing the CDAO in 2021 as a peer of its Research and Engineering and Chief Information offices. The move was intended to push perpetually stalled prototypes into adopted, AI-enabled tools that could fuel enterprise applications for urgent warfighting needs.
That focus and effort are needed. DoD is a nearly $1 trillion organization with millions of people spread around the world. Given that immense size and a no-fail mission of national defense, commercial AI tools like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, when adapted to defense contexts, have tremendous potential to meet warfighter needs and save taxpayer dollars. But despite AI’s promise, DoD remains stuck in experimentation mode in its new contracts with companies like OpenAI and Anthropic, even in areas with proven results.
Meanwhile, the pace of adversarial innovation and adoption is accelerating, especially with the recent lifting of export controls on advanced chips. China will subsidize its domestic champions; steal intellectual property, as it has in every technology category for decades; and use U.S. technology to fuel AI adoption by its military and advance cutting-edge AI firms like DeepSeek.
Despite the accelerating pace of Chinese adoption, the Pentagon is still stuck in a “two steps forward, one step back” paradigm that often delays scaling, sometimes over concerns about military use of AI. That hesitation fundamentally misunderstands the guardrails already in place at the Pentagon, including the human oversight and accountability frameworks that have been embraced by the Trump administration and that DoD personnel use to guard against catastrophic risks.
To better understand the issue, consider two use cases. First, there is a desperate need to reform the Pentagon’s procurement processes. AI can take time-consuming tasks, like contract generation, and complete them in hours rather than weeks. It can predict potential maintenance issues more effectively, as it already does for some Air Force platforms, and identify supply chain risks early to make inventory management more resilient.
Indeed, the Pentagon has invested significantly in AI tools to speed up acquisition. Commercial tools like ChatGPT, adapted to defense contexts, have tremendous potential to improve efficiency, saving taxpayer dollars, improving industry engagements, and meeting warfighter needs all at the same time. Treating AI as a technology that needs more research, rather than scaled adoption, will impede these uses. With a shrinking workforce, DoD could end up in the worst of both worlds: without enough people to operate effectively under the old business model and without the AI tools necessary to implement a new one.
Now consider the use of AI in military operations centers across the globe. These centers are the regional and functional information and decision hubs for everything related to U.S. national defense. They help protect the homeland from missile attacks, monitor their regions of responsibility daily for instability, and, in the case of Indo-Pacific Command, work to prevent (or ensure the U.S. military can win) a potential war with China. For several years, the Pentagon has been modernizing these operations through a program called Combined Joint All-Domain Command and Control (CJADC2). Using an agile development process led by CDAO, warfighters worked directly with technology experts and senior leaders at the Pentagon to find technology solutions that would make operators faster and more accurate at processing and acting on vast amounts of data. The program went from facing widespread criticism to achieving minimum viable capability in only eighteen months. And because it combined technology and process transformations, it broke through to deploy on a global scale only six months later.
These validated AI tools have the accuracy and transparency required to save lives and deliver precise, reliable effects. But now the Pentagon has, perhaps inadvertently in a push to consolidate technology initiatives under Research and Engineering, jeopardized progress that itself already needed accelerating. Trump’s AI Action Plan rightly focused on spurring innovation and removing barriers while appropriately managing risks. The Pentagon’s policy changes won’t fulfill that directive. Let’s hope DoD has another trick up its sleeve to make clear the need to accelerate AI adoption across the U.S. military. Demoting AI’s importance inside the walls of the Pentagon would make American drone and AI dominance far less likely. After all, the real U.S.-China AI race is about adoption. That means it’s time to push the go button, not to slow down.
Q: Why is the Pentagon's recent realignment of its AI office concerning?
A: The realignment moves the Chief Digital and Artificial Intelligence Office (CDAO) from a leadership role to a research and development shop, which may signal that AI isn't ready for prime time and create new bureaucratic barriers, hindering its adoption.
Q: What are the potential benefits of AI in military operations?
A: AI can enable faster decision-making with greater accuracy, improve procurement processes, predict maintenance issues, and enhance supply chain management, all of which can increase military efficiency and effectiveness.
Q: How does China's approach to AI adoption differ from that of the United States?
A: China is accelerating its AI adoption through subsidies, intellectual property theft, and the use of U.S. technology, which could give it a competitive edge in the global AI race.
Q: What specific use cases demonstrate the potential of AI in the U.S. military?
A: AI can be used to speed up contract generation, predict maintenance issues for military platforms, and enhance the capabilities of military operational centers through programs like Combined Joint All Domain Command and Control (CJADC2).
Q: What are the risks of treating AI as a technology that needs more research rather than scaled adoption?
A: Treating AI as a technology that needs more research can delay its implementation, leading to missed opportunities for efficiency gains and potential military advantages, especially as adversaries like China accelerate their AI adoption.