In what may be the most significant confirmed use of artificial intelligence in a live military operation, Axios reported today that the Pentagon used Anthropic’s Claude AI during the January 3rd raid that captured Venezuelan President Nicolás Maduro — not just in the planning stages, but during the active operation itself.
The revelation comes at a pivotal moment for AI in national security, as the Department of Defense pushes aggressively to deploy frontier AI models across classified networks with fewer restrictions than ever before.
According to sources cited by Axios, Anthropic’s Claude was used during the active phase of Operation Absolute Resolve, the predawn raid on January 3, 2026, in which U.S. special operations forces, including Delta Force and the 160th Special Operations Aviation Regiment, captured Maduro and his wife Cilia Flores at their compound in Caracas.
The specific details of how Claude was employed remain classified, but the fact that it was used during the operation itself, not merely in pre-mission planning, marks a significant milestone: it suggests AI played a real-time role in one of the most complex U.S. military operations in recent history.
The operation involved a massive multi-domain effort spanning the U.S. Army, Navy, Marines, Air Force, Cyber Command, Space Command, the CIA, DEA, and FBI. Seven U.S. soldiers were injured, but no Americans were killed. Venezuelan and Cuban forces suffered dozens of casualties.
The Venezuela revelation doesn’t come out of nowhere. Reuters reported this week that Claude is already available in classified settings through third-party integrations, and an Anthropic spokesperson confirmed that “Claude is already extensively used for national security missions by the U.S. government.”
Unlike OpenAI’s ChatGPT, which only recently became available on the Pentagon’s unclassified genai.mil network, Claude is cleared for Top Secret use cases. That distinction likely made it the tool of choice for an operation as sensitive as the capture of a sitting head of state.
This story sits within a much larger shift happening right now between the Pentagon and Silicon Valley. Semafor reported on February 11th that the Department of Defense is demanding AI companies provide their tools for “all lawful uses” — meaning no company-imposed restrictions on how models are deployed.
Here’s where it gets interesting:
Despite that pressure, Anthropic appears to be walking a careful line, maintaining its safety principles while still supporting national security missions. In a statement to Reuters (also carried by the Times of India), the company said it is “committed to protecting America’s lead in AI and helping the U.S. government counter foreign threats.”
The confirmed use of Claude in a live combat operation crosses a threshold the defense and intelligence communities have been approaching for years. Here’s why it matters:
AI models capable of synthesizing intelligence, analyzing patterns, and supporting commanders in real time are no longer theoretical. If Claude was used during the active phase of a multi-domain operation involving special forces, air strikes, and cyber operations, the technology has reached a level of reliability the military is willing to bet on in high-stakes scenarios.
AI researchers have warned that these models can generate plausible-sounding but incorrect information — a risk that carries life-or-death consequences in classified military settings. The tension between the Pentagon’s desire for unrestricted access and AI companies’ safety guardrails will only intensify now that we have a confirmed operational use case.
With multiple AI companies now competing for Pentagon contracts across classification levels, we’re entering an era where AI capability on classified networks may become as important as traditional defense technology. The companies that can operate in these environments — with the trust of both military leaders and their own safety teams — will have a significant strategic advantage.
Operation Absolute Resolve was already historic — the first U.S. military operation to capture a sitting foreign leader since the Panama invasion in 1989. The revelation that AI played an active role during the operation adds another layer to its significance.
As AI becomes embedded in military operations, the questions of oversight, reliability, and ethical boundaries will only grow more urgent. The defense AI community — and the broader public — will be watching closely to see how this new chapter unfolds.
Related Episode: Want to see how AI is already reshaping geopolitical intelligence? Check out our latest episode — This A.I. Predicts Geopolitical Unrest — where we dive into how AI tools are being used to forecast global threats before they happen.
For more on AI, defense tech, and intelligence — subscribe to The NDS Show on YouTube.