AI Firm Anthropic Sues Trump Admin to Overturn Supply Chain Risk Label

Artificial intelligence company Anthropic is taking legal action against what it describes as the Trump administration's "unlawful campaign of retaliation" over its refusal to allow unrestricted military use of its technology. The company has filed lawsuits in both California and Washington, D.C., aiming to reverse the Pentagon's decision to label it a "supply chain risk" and to stop President Donald Trump's directive that federal employees cease using its AI chatbot, Claude.
This legal battle highlights a growing public dispute over the ethical implications of AI in warfare and surveillance. It has also drawn in other tech industry players, notably OpenAI, which recently secured a deal with the Pentagon just hours after the government targeted Anthropic for its stance.

Anthropic initiated two separate lawsuits on Monday, one in California and another in the federal appeals court in Washington, D.C. Each lawsuit addresses different aspects of the government’s actions against the San Francisco-based company. In its filing, Anthropic asserts that these actions are “unprecedented and unlawful.” The company argues that the Constitution does not permit the government to use its power to retaliate against a company for its protected speech and that no federal statute authorizes the actions taken.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation,” the lawsuit states.

The Defense Department has not commented on the matter, citing a policy of not discussing ongoing litigation. However, Anthropic claims it sought to prevent its technology from being used for mass surveillance of Americans and fully autonomous weapons. High-ranking officials, including Defense Secretary Pete Hegseth, have publicly demanded that the company accept “all lawful” uses of Claude, threatened punishment if Anthropic did not comply, and criticized the firm and its CEO, Dario Amodei, on social media.
The supply chain risk designation, a tool intended to prevent foreign adversaries from compromising national security systems, has effectively cut off Anthropic's defense work. This marks the first known instance of the federal government applying such a designation to a U.S. company. Hegseth stated in a March 4 letter to Anthropic that the move was "necessary to protect national security," according to the company's lawsuit.
Trump also ordered federal agencies to stop using Claude, though he granted the Pentagon six months to phase out the product, which is deeply integrated into classified military systems, including those used in the Iran war. Anthropic's lawsuit also names other federal agencies, including the departments of Treasury and State, after officials ordered employees to stop using Claude.
Michael Pastor, a professor at New York Law School, described the case as “escalated beyond comprehension.” He noted that it is unprecedented for the government to threaten a company with destruction over a disagreement. “I’ve never seen a case like this,” Pastor said. “It would never have struck our minds that, when we were having difficulty in a negotiation, we would threaten the company essentially with destruction.”
Even as it fights the Pentagon’s actions, Anthropic is trying to reassure businesses and other government agencies that the supply chain risk designation is limited to military contractors using Claude for the Department of Defense. This distinction is crucial for the privately held company, as most of its projected $14 billion in revenue comes from businesses and government agencies using Claude for tasks like computer coding.
More than 500 customers pay Anthropic at least $1 million annually for Claude, according to a recent investment announcement valuing the company at $380 billion. In a statement, Anthropic emphasized that seeking judicial review does not change its commitment to using AI to protect national security but is necessary to safeguard its business, customers, and partners.
The company’s mission centers on AI safety and positive outcomes for humanity, as outlined in its founding in 2021 by Amodei and six former OpenAI employees. Its usage policy explicitly prohibits “lethal autonomous warfare without human oversight and surveillance of Americans en masse.” Anthropic claims it has never tested Claude on these applications and lacks confidence in its reliability or safety for such purposes.
At the same time, the company has allowed the military to use Claude in ways unavailable to civilians, including military operations and analyzing "lawfully collected foreign intelligence information." Until recently, Anthropic was the only AI company approved to supply its model to classified military systems. The dispute has prompted the Pentagon to consider shifting some of Claude's work to Google's Gemini, OpenAI's ChatGPT, and Elon Musk's Grok.
Anthropic’s lawsuit alleges that the Trump administration’s actions are damaging its reputation, jeopardizing hundreds of millions of dollars in contracts, and attempting to “destroy the economic value created by one of the world’s fastest-growing private companies.” Meanwhile, the conflict has bolstered Anthropic’s reputation among some customers and tech workers who support its refusal to yield to pressure from the Trump administration.
Amodei's moral stance was further highlighted when his rival, OpenAI CEO Sam Altman, sought to replace the Pentagon's Claude with ChatGPT, a move Altman later conceded looked rushed and opportunistic. Consumer downloads of Claude surged, temporarily surpassing better-known models like ChatGPT and Gemini.
The debate over how companies set guardrails for AI continues to influence competition for talent in the AI industry. OpenAI’s head of robotics, Caitlin Kalinowski, resigned over OpenAI’s Pentagon deal, expressing concerns about surveillance of Americans without judicial oversight and lethal autonomy without human authorization.
Separately, more than 30 leading AI researchers at OpenAI and Google, including Google's chief scientist and AI research division head Jeff Dean, filed a legal brief supporting Anthropic. They argued that national security is not served by reckless designations of American technology partners as "supply chain risks" or by suppressing public discourse on AI safety.