
Hegseth Blocks AI Use in Fully Autonomous Weapons Amid Pentagon Standoff


Defense Secretary Pete Hegseth has moved to block the use of certain artificial intelligence systems in fully autonomous weapons, escalating a high-stakes dispute between the Pentagon and leading AI developer Anthropic. The controversy centers on whether advanced AI models should be permitted to power weapons systems capable of selecting and engaging targets without direct human oversight.

At the heart of the standoff is Anthropic’s refusal to remove built-in ethical safeguards from its AI models, including its flagship system, Claude. The company has long maintained that its technology should not be used to develop fully autonomous weapons or conduct unchecked mass surveillance. Executives argue that current AI systems are not reliable enough to make life-and-death battlefield decisions independently.

According to defense officials, the Pentagon sought broader authorization to deploy AI tools across a wide range of military applications, provided such uses were lawful. However, Anthropic declined to lift its restrictions, citing safety risks and potential violations of international norms surrounding armed conflict.

In response, Hegseth designated the company as a potential supply-chain risk, effectively restricting Pentagon contractors and affiliated defense partners from engaging in commercial activity involving its AI technology. The move follows broader directives from the administration of Donald Trump aimed at tightening oversight of AI partnerships across federal agencies.

The decision has triggered an intense debate over the future of AI in national defense. Supporters of the Pentagon’s position argue that limiting access to cutting-edge artificial intelligence could weaken U.S. military readiness, particularly as global rivals accelerate their own AI-powered defense initiatives. They contend that advanced machine-learning systems are critical for logistics, intelligence analysis, cybersecurity, and potentially battlefield coordination.

On the other side, AI safety advocates warn that fully autonomous weapons—systems capable of identifying and striking targets without meaningful human control—pose profound ethical and security risks. They caution that premature deployment of such technology could increase the likelihood of unintended escalation, civilian casualties, or algorithmic errors in high-pressure combat scenarios.

Anthropic’s leadership has reiterated that it remains open to lawful government partnerships but will not compromise its core AI safety principles. The company emphasizes that maintaining guardrails is essential to prevent misuse and protect both civilians and military personnel. Legal analysts suggest the dispute could lead to court challenges if contractual restrictions or federal designations are contested.

The broader clash reflects a growing global conversation about AI governance, military ethics, and the balance between innovation and accountability. While the Pentagon continues to explore alternative AI providers willing to comply with its terms, the current impasse underscores the complex intersection of technology policy and national security strategy.

As debates over autonomous weapons intensify worldwide, the outcome of this standoff could set a powerful precedent for how the United States regulates artificial intelligence in defense operations. Whether this leads to stricter federal standards, new legislation, or expanded industry collaboration remains to be seen—but the battle over AI’s role in modern warfare is clearly just beginning.
