Anthropic Sues Trump Admin Over Supply Chain Risk Designation

The Legal Battle Between Anthropic and the U.S. Government

Anthropic, a leading artificial intelligence company, has taken legal action against the Department of Defense and other federal agencies over the Trump administration’s decision to label the company as a “supply chain risk.” This move has sparked a significant conflict between the Pentagon and one of the world's most prominent AI firms, highlighting the growing tension around the use of AI in national security.

The supply chain risk designation is typically reserved for companies linked to foreign adversaries. This classification significantly impacts how Anthropic can conduct business with entities working alongside the Defense Department. The company argues that this designation is legally unfounded and that the Trump administration's directive to cease using its technology is both unprecedented and unlawful.

In a statement, an Anthropic spokesperson emphasized the company's commitment to using AI to protect national security while also asserting the necessity of legal action to safeguard their business, customers, and partners. "We will continue to pursue every path toward resolution, including dialogue with the government," the spokesperson added.

Despite the ongoing legal battle, the Pentagon has chosen not to comment on the litigation, citing department policy. Meanwhile, White House spokesperson Liz Huston defended the administration's stance, stating that the president would not allow any "radical left, woke company" to dictate military operations. She emphasized that the Trump administration is ensuring that the military has the necessary tools to succeed without being influenced by ideological whims.

The Root of the Conflict

The Pentagon issued the supply chain risk designation after negotiations with Anthropic broke down over two key issues: the company sought restrictions barring the use of its AI tools for mass surveillance of U.S. citizens and for autonomous weapons. While the Pentagon claims it is not interested in those applications, it insists on being able to use Anthropic's AI for "all lawful purposes."

This disagreement led to the Trump administration ordering federal agencies and military contractors to halt business with Anthropic on February 27. Defense Secretary Pete Hegseth stated that no contractor, supplier, or partner could engage in commercial activities with Anthropic.

First Amendment Concerns

In its legal filing, Anthropic alleges that the government is retaliating against the company for its First Amendment-protected speech. The company also argues that the Trump administration lacks the authority to direct federal agencies to stop using its technology and that it was not given adequate due process.

Anthropic is seeking injunctive relief, arguing that its current and future contracts with private parties are at risk, potentially jeopardizing hundreds of millions of dollars. The company's CEO, Dario Amodei, noted that the formal letter designating Anthropic a supply chain risk indicates that its customers will be restricted from using Claude only in work directly related to their Pentagon contracts.

Support from Competitors

Dozens of scientists and researchers from OpenAI and Google DeepMind have filed an amicus brief in support of Anthropic, arguing that the supply chain risk designation could harm U.S. competitiveness in the AI industry and hinder public discussion of the risks and benefits of AI. They also argued that Anthropic's red lines reflect legitimate concerns.

Amodei met with Hegseth on February 24, but no agreement was reached. In a blog post, Amodei explained that AI cannot currently be used reliably and safely for mass surveillance or autonomous weapons. He also mentioned that the company had been having productive conversations with the Pentagon about working together while adhering to its red lines.

A Rising Profile

Despite the conflict, Anthropic's profile has risen significantly. Its Claude AI app surpassed OpenAI's ChatGPT in Apple's App Store for the first time after the Pentagon terminated its contract with Anthropic. The company also reported that over a million people sign up for Claude daily.

The situation underscores the complex relationship between AI companies and the government, raising important questions about the ethical use of AI and the balance between innovation and national security. As the legal battle continues, the outcome could set a precedent for how AI is regulated and utilized in the future.
