
AI tech firm Anthropic sues over blacklisting by Pentagon | US News



Anthropic, which owns the AI assistant Claude, is suing the Trump administration after what it called an “unprecedented and unlawful” decision to blacklist the firm on national security grounds.

The Pentagon designated the artificial intelligence company a “supply chain risk” on Thursday over its refusal to allow unrestricted military use of its technology.

The company has been involved in an unusually public dispute over how its AI chatbot Claude could be used in warfare.

Anthropic responded on Monday by filing two separate lawsuits, one in California federal court and another in the federal appeals court in Washington DC, each challenging different aspects of the Pentagon’s actions against the company.


Why did the Pentagon threaten the AI company?

“These actions are unprecedented and unlawful,” Anthropic’s lawsuit says.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorises the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”

The defence department declined to respond, saying its policy is not to comment on ongoing litigation.

Anthropic, whose financial backers include Alphabet’s Google and Amazon, has insisted on restricting its technology from being used for mass surveillance of Americans and fully autonomous weapons.

US defence secretary Pete Hegseth had threatened to punish Anthropic if it did not accept “all lawful uses” of Claude.
Donald Trump also said he would order federal agencies to stop using Claude, though he gave the Pentagon six months to stop using the AI assistant, which is deeply embedded in classified military systems, including those used in the Iran war.

Designating Anthropic a supply chain risk would cut off its defence work by using powers designed to prevent foreign adversaries from harming national security systems. It is the first time the federal government is known to have used the designation against a US company.

Read more from Jattvibe:
Trump’s furious response to Anthropic
Anthropic’s model is scaring lawyers
AI willing to ‘go nuclear’ in wargames
Anthropic, which has recently been valued at $380bn (£284bn), has attempted to convince businesses and other government agencies that the Trump administration’s penalty is narrow, affecting only military contractors when they are using Claude for defence work.

Most of its projected $14bn (£10.5bn) in revenue this year comes from businesses and government agencies, which use Claude for computer coding and other tasks.

The defence department signed agreements worth up to $200m each with major AI labs in the past year, including Anthropic, OpenAI and Google.

Microsoft-backed OpenAI announced a deal with the US military to use its technology shortly after Mr Hegseth moved to blacklist Anthropic.




©2025 – All Rights Reserved. Designed and Developed by JATTVIBE.