AI warfront

Graphic By Addison Watts

March 4, 2026 | Hannah Field | Editor-in-Chief

On Jan. 9, 2026, the Secretary of War, Pete Hegseth, published a memorandum regarding the use of artificial intelligence by the American government, stating that AI dominance will redefine military affairs over the next decade and that integrating AI will make America more lethal and efficient. “I direct the Department of War to accelerate America’s Military AI Dominance by becoming an ‘AI-first’ warfighting force across all components, from front to back,” Hegseth said.

The document envisions AI playing a central role in warfighting through AI-enabled battle management and decision support, “from campaign planning to kill chain execution.”

The Pentagon aimed to contract with Anthropic, the AI research company behind Claude, considered one of the best available systems. A rival to OpenAI’s ChatGPT, Anthropic describes itself as “a public benefit corporation dedicated to securing its benefits and mitigating the risks” as AI rises in popularity. Claude is well known for its strength in coding, UI design and intelligent writing, often outperforming ChatGPT, and is regarded as a safe model, with Anthropic claiming to follow AI responsibility codes of conduct. It is considered the most capable model for sensitive and intelligence work on behalf of the government, which explains the Pentagon’s strong desire to use it without barriers.

More specifically, the Department of War asked Anthropic to remove safety and security guardrails from Claude to allow unrestricted military use.

As of Feb. 27, however, the current presidential administration decided against integrating Claude, following Anthropic’s refusal to grant unrestricted access out of concern that its AI systems could be used for domestic surveillance and as potential weapons of war.

Anthropic CEO Dario Amodei said in a statement Feb. 26: “They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’ … and to invoke the Defense Production Act to force the safeguards’ removal … Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”

Amodei went on to state that serving the Department of War remains in Anthropic’s best interest and that the company stands ready to support the United States, apart from the two narrow exceptions listed.

Secretary of War Pete Hegseth posted to X: “Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”

Hegseth went on to direct that Anthropic be designated a supply-chain risk, effectively barring government business with the company, and stated that the Department of War would transition to a “more patriotic service” within six months.

Donald Trump announced on Truth Social the same day that Anthropic’s “selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”

Trump also threatened Anthropic with criminal consequences if it did not comply.

Anthropic responded, “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

Historically, Claude was the first frontier AI model deployed on U.S. government networks, used to support medical research, foreign intelligence analysis, efforts to combat human trafficking and more. Now, however, it is virtually blacklisted, and government-associated companies are unable to work with Anthropic. This tactic has previously been applied only to foreign companies such as Huawei, a Chinese technology company banned over national security concerns.

On Feb. 28, OpenAI released a statement titled “Our agreement with the Department of War,” which indicated that OpenAI is also not allowing the Department of War to use its AI models for domestic surveillance, autonomous weapons systems or high-stakes automated decisions. The company claimed to have a more expansive approach with more safeguards than what Anthropic offered, and said the Department of War made clear that domestic surveillance was not among its reasons for adopting OpenAI’s models.

Elon Musk’s AI model, Grok, was incorporated into government operations in January, running inside the Pentagon network. Around the same time, controversy arose over Grok’s functionality as users found it capable of generating highly sexualized pornographic images of real people without their consent, as well as repeating antisemitic and racist rhetoric. Grok was created to be the opposite of “woke AI,” in Musk’s words, a label likely aimed at Claude and ChatGPT.

In terms of timeline, the clash between Anthropic and the Pentagon followed the United States’ attack on Venezuela, which captured President Nicolás Maduro. Anthropic reached out to Palantir, a controversial data integration and analytics software company co-founded by Peter Thiel, to inquire about Claude’s role in the attack. Palantir confirmed that Anthropic’s technology was used alongside Palantir’s Maven Smart System, which the Department of War relies on heavily for military logistics planning and targeting.

Beyond their political impact, AI systems take a heavy toll on the environment, consuming immense amounts of water for cooling and drawing heavily on a power grid still largely reliant on fossil fuels. Billions of dollars have been spent to construct massive data centers to serve demand, raising fears of higher electricity bills in rural and urban areas near the new facilities.

The future of AI involvement in warfare is currently unknown, but the Department of War outlined its plans very clearly in early 2026 and has since emphasized lawful use of AI systems, despite Anthropic and OpenAI’s concerns.


Contact the author at howleditorinchief@wou.edu