
TechTonic: US military deploys Anthropic’s Claude AI in Iran strikes despite Trump’s ban



In one of the most striking intersections of artificial intelligence and modern warfare, the US military reportedly deployed Claude, the large language AI model developed by the San Francisco-based company Anthropic, in its recent attacks on Iran, even after President Donald Trump ordered all federal agencies to stop using it.

The development has laid bare the deep integration of advanced AI tools into military planning and raised urgent questions about governance, ethics and the future of AI in war.

According to reports by The Wall Street Journal and other outlets, the US military incorporated Claude into its operational planning during the massive US-Israel strikes on Iranian targets. The AI system was used for intelligence analysis (assisting commanders in processing and interpreting complex battlefield data), target identification, and running simulations to support strategic decisions about how operations might unfold.

As per the reports, Claude's use went beyond mere technical assistance, reflecting how deeply embedded AI tools have become in US military workflows even as political tensions over those tools escalate.

The controversy stems from Trump's decision, issued just hours before the Iran attack began, to require all federal agencies to immediately cease using Anthropic's AI technologies. Trump publicly denounced Anthropic as a "radical Left AI company" and criticised its leadership for not aligning with military needs, claiming its reluctance to fully open up technology rights posed national security risks.

Despite this directive, US Central Command and other military bodies continued using Claude in the Iran operation. The reason cited was that Claude was already deeply integrated into military intelligence platforms and there was no ready substitute that could be deployed on such short notice. As the Pentagon itself acknowledged, detaching from a widely embedded technology could not happen overnight.

Anthropic's Claude had become one of the few AI systems cleared for use within classified US military networks, allowing it to handle sensitive intelligence data. Its integration involved partnerships with defence-oriented data systems and cloud infrastructure, which made it useful for real-world operational tasks.

This integration had begun well before the Iran conflict, including deployment in operations such as the mission to capture Venezuelan President Nicolás Maduro. Anthropic later objected that such use violated its terms, which explicitly bar the deployment of Claude for lethal autonomous weapons or for surveillance without human oversight.
