Anthropic Sues Trump Admin Over Pentagon Ban

A Legal Battle Over AI Ethics and National Security

Anthropic, a leading artificial intelligence company, has taken legal action against the Trump administration, claiming that the U.S. government retaliated against it for refusing to allow its Claude AI model to be used in autonomous lethal warfare and mass surveillance of Americans. This lawsuit marks a significant moment in the ongoing debate about the ethical use of AI technology.

In a 48-page complaint filed in federal court in San Francisco, Anthropic is seeking to have its designation as a national security supply-chain risk declared unlawful and blocked. The company argues that this label not only restricts the use of its technology by the Pentagon but also imposes stringent requirements on defense vendors and contractors who must certify that they do not use Anthropic's models in their work with the department.

A Commitment to Ethical AI

Anthropic was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible." The company claims that the federal government's actions are a direct response to its commitment to these principles.

This case is particularly notable because Anthropic is the first U.S. company to be publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei. The consequences of this case are described as enormous, with the government "seeking to destroy the economic value created by one of the world's fastest-growing private companies."

The Dispute Escalates

The dispute erupted after Anthropic frustrated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems. In response, President Donald Trump ordered every federal agency to cease all use of Anthropic's technology. Hours later, Hegseth designated Anthropic a "Supply-Chain Risk to National Security," effectively barring military contractors, suppliers, or partners from conducting any commercial activity with the company.

This decision came just days before a U.S. military strike on Iran. Claude, the AI model in question, is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on the Defense Department's classified systems.

Legal Arguments and Industry Support

In its lawsuit, Anthropic argues that the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon's statutory authority, and deprive it of due process under the Fifth Amendment. The company asserts that "the Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."

More than three dozen AI industry insiders from OpenAI and Google, including Google chief scientist Jeff Dean, have supported Anthropic in an amicus brief filed with the court. These professionals argue that today's frontier AI systems present risks when deployed for domestic mass surveillance or autonomous lethal weapons systems without human oversight. They emphasize the need for guardrails, whether through technical safeguards or usage restrictions.

According to the filing, current AI models are not reliable enough to be trusted with lethal targeting decisions, and the combination of powerful AI with extensive data about individuals threatens to change the fabric of public life in the country.

A New Era in AI Development

Founded in 2021 by siblings Dario and Daniela Amodei, both former staffers at ChatGPT-maker OpenAI, Anthropic has positioned itself as a safety-focused alternative in the AI race. As the legal battle unfolds, it raises critical questions about the balance between national security and the ethical development of AI technologies.

This case could set a precedent for how the U.S. government interacts with AI companies and the extent to which it can regulate the use of advanced AI systems. The outcome may have far-reaching implications for the future of AI innovation and its role in society.