AI Firm Anthropic Sues Trump Admin to Remove 'Supply Chain Risk' Label

Legal Battle Over AI Use in National Security

Artificial intelligence company Anthropic is embroiled in a high-stakes legal battle with the Trump administration over its refusal to allow unrestricted military use of its technology. The company has filed lawsuits to challenge what it describes as an "unlawful campaign of retaliation" against its stance on AI applications in warfare and surveillance.

Anthropic requested federal courts to reverse the Pentagon's decision to label the company a "supply chain risk." Additionally, the firm seeks to undo President Donald Trump’s directive for federal employees to stop using its AI chatbot, Claude. This dispute highlights growing tensions around how AI should be regulated and deployed, particularly in sensitive areas like national security.

The legal challenges have intensified an already public conflict, drawing in industry rivals such as OpenAI, which recently entered into a deal with the Pentagon. This move came just hours after the government took action against Anthropic for its position.

Multiple Lawsuits Target Government Actions

Anthropic filed two separate lawsuits on Monday, one in California federal court and another in the federal appeals court in Washington, D.C. Each lawsuit addresses different aspects of the government's actions against the San Francisco-based company. The company claims that these actions are unprecedented and unlawful, arguing that the Constitution does not permit the government to punish a company for its protected speech.

According to the lawsuit, no federal statute authorizes the actions taken by the government. Anthropic is seeking judicial review as a last resort to protect its rights and halt what it calls the Executive’s unlawful campaign of retaliation.

The Defense Department has declined to comment, citing a policy of not discussing ongoing litigation. Anthropic, for its part, has emphasized its commitment to preventing its technology from being used for mass surveillance of Americans or for fully autonomous weapons.

National Security Concerns and Industry Reactions

Defense Secretary Pete Hegseth and other officials have insisted that Anthropic must accept "all lawful" uses of its AI chatbot, threatening punishment if the company does not comply. They have also condemned the firm and its CEO, Dario Amodei, on social media.

The designation of Anthropic as a supply chain risk cuts off its defense work, a measure intended to prevent foreign adversaries from compromising national security systems. This is the first known instance of the federal government using this designation against a U.S. company. Hegseth stated in a March 4 letter to Anthropic that the designation was "necessary to protect national security," according to the company's lawsuit.

Trump also ordered federal agencies to stop using Claude, though he gave the Pentagon six months to phase out the product, which is deeply integrated into classified military systems, including those used in the Iran war.

Broader Implications for the AI Industry

Anthropic's legal arguments include strong First Amendment and due process claims, with experts noting that the case has escalated beyond expectations. Michael Pastor, a professor at New York Law School, described the situation as unprecedented, highlighting the potential consequences of such actions.

Even as it fights the Pentagon's actions, Anthropic has sought to reassure businesses and government agencies that the supply chain risk designation is narrow and only affects military contractors when using Claude for the Department of Defense. This distinction is crucial for the company, as most of its projected $14 billion in revenue comes from businesses and government agencies using Claude for tasks like computer coding.

Ethical Stance and Market Impact

Anthropic has emphasized AI safety and positive outcomes for humanity since its founding in 2021 by Amodei and six other former OpenAI employees. Its usage policy prohibits "lethal autonomous warfare without human oversight and surveillance of Americans en masse."

Despite these ethical stances, the company has allowed the military to use Claude in ways that civilians cannot, including military operations and analyzing "lawfully collected foreign intelligence information." Until recently, Anthropic was the only tech company approved to supply its AI model to classified military systems.

The dispute has led the Pentagon to consider shifting some of Claude's work to Google's Gemini, OpenAI's ChatGPT, and Elon Musk's Grok. Anthropic's lawsuit alleges that the Trump administration's actions are damaging its reputation, jeopardizing contracts, and attempting to destroy the economic value of the company.

Public Support and Industry Backing

Conversely, the fight has bolstered Anthropic's reputation among customers and tech workers who support its refusal to yield to pressure from the Trump administration. Amodei's stance drew further attention when his rival, OpenAI CEO Sam Altman, sought to replace the Pentagon's Claude with ChatGPT, a move Altman later acknowledged was rushed and opportunistic.

Consumer downloads of Claude surged, lifting its popularity above better-known competitors like ChatGPT and Gemini. The issue of setting guardrails for AI use continues to influence competition for talent in the AI industry. OpenAI's head of robotics, Caitlin Kalinowski, resigned over OpenAI's Pentagon deal, expressing concerns about surveillance and lethal autonomy.

A group of more than 30 leading AI developers at OpenAI and Google, including Google's chief scientist Jeff Dean, filed a legal brief supporting Anthropic. They argued that national security is not served by reckless designations or the suppression of public discourse on AI safety.
