This Wall Street Journal report investigates allegations that Anthropic’s AI model, Claude, was used in JSOC’s operation to capture former Venezuelan President Nicolás Maduro, a sign that frontier AI tools are rapidly gaining traction within the Pentagon. The mission reportedly included AI-enabled targeting that supported the bombing of multiple sites in Caracas, even though Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance.
Anthropic said it “cannot comment” on any specific operation and stressed that any use of its models must comply with its Usage Policies. The Defense Department also declined to comment. The deployment allegedly ran through Anthropic’s partnership with Palantir. The report also examines tensions around Anthropic’s $200M Pentagon contract and broader pushback against AI models that “won’t allow you to fight wars,” even as Anthropic’s CEO continues to call for stronger guardrails on lethal autonomy and domestic surveillance.
Readers looking to understand what AI-enabled operations may mean for future conflict should also read Artificial Intelligence and a Reconfiguration of Military Power, as well as Accelerating Decision-Making: Integrating Artificial Intelligence into the Modern Wargame. Together, they help situate the Maduro raid within a broader shift toward AI-enabled decision superiority, foreshadowing AI-assisted decapitation strikes as a feature of future warfare.