OpenAI has announced revisions to its recent agreement with the United States Department of Defense — originally signed to deploy its advanced AI models on a classified military network — after intense backlash from users, employees, and privacy advocates. The move follows growing concern that powerful AI tools could be used in ways that violate civil liberties or ethical standards.
The deal was first disclosed shortly after a high-profile breakup between the U.S. government and rival AI company Anthropic, which had refused to negotiate on terms that would allow its AI to be used for mass surveillance or autonomous weapons. In response, President Donald Trump’s administration labeled Anthropic a “supply chain risk,” paving the way for OpenAI to step in.
Backlash Sparks Revisions
Shortly after OpenAI announced the partnership, widespread criticism erupted from multiple fronts:
- Users on social media launched a “QuitGPT” boycott movement and canceled subscriptions, contributing to a surge in App Store uninstalls and a rise in rival AI downloads.
- Internal dissent grew as employees publicly criticized the military deal, with staff at companies like Google joining OpenAI employees in calling for ethical safeguards.
- Privacy advocates, legal experts, and tech commentators expressed alarm that the AI could be used in mass surveillance operations or for autonomous military decision-making.
Faced with this backlash, CEO Sam Altman publicly acknowledged that the original contract rollout “looked opportunistic and sloppy” and pledged to revise the agreement with clearer protections and ethical boundaries.
What Changed in the Deal
In response to the criticism:
- OpenAI added explicit language prohibiting the intentional use of its AI systems for domestic surveillance of U.S. persons and nationals.
- The company said the revised contract also clarifies that intelligence agencies like the National Security Agency cannot use its AI without separate approvals.
- OpenAI insists the changes strengthen “guardrails” around how its models are deployed on classified networks and reduce ethical concerns about misuse.
Altman called the situation a learning opportunity and said OpenAI plans to work with government partners to ensure these safeguards are enforceable and aligned with U.S. laws on privacy and civil liberties.
Why This Matters
The revision reflects the growing tension between government demand for advanced AI tools and public concerns over privacy and ethics. As AI becomes more deeply integrated into national security systems, companies like OpenAI are under pressure to balance innovation with accountability.
Critics say the episode highlights how public opinion and internal pushback can influence corporate policy, especially on issues like mass surveillance, autonomous weapons, and AI in military operations. Whether the revised contract fully satisfies critics remains debated, but the backlash has already influenced OpenAI’s approach and could shape future AI-government collaborations.