A group of nearly 150 retired judges has come out in support of Anthropic, raising serious concerns over the US government’s decision to label the firm a “supply chain risk.”
Judges Challenge Government Decision
The judges, appointed by both political parties, filed a legal brief arguing that the designation by the US Department of Defense:
- Misinterpreted existing laws
- Failed to follow proper procedures
- Could set a troubling precedent for government overreach
They emphasized that Anthropic is not seeking government contracts but simply wants to avoid being unfairly penalized.
What Triggered the Dispute
The conflict began after disagreements between Anthropic and the government over how Anthropic's AI systems could be used:
- The Pentagon wanted broader use of AI tools in classified systems
- Anthropic refused to allow use in:
  - Autonomous weapons
  - Mass surveillance of US citizens
Following this, the company was labeled a "supply chain risk," a classification typically reserved for foreign-linked entities.
Unprecedented Move
This designation is unusual because:
- It has rarely been applied to US-based companies
- It restricts how government contractors can use the company’s technology
- It can impact business relationships across the defense ecosystem
Additionally, the administration ordered federal agencies to stop using Anthropic’s AI systems.
Financial and Legal Impact
Anthropic says the decision could have major consequences:
- Potential loss of hundreds of millions in revenue
- Damage to partnerships with defense contractors
- Long-term reputational impact
CEO Dario Amodei stated the company had no choice but to challenge the move in court.
Broader Industry Concerns
The case has drawn attention from across the tech and policy world:
- Industry groups and tech companies have voiced support
- Ethics experts warn about pressure on companies to compromise values
- Questions are being raised about balancing national security and corporate ethics