
If you thought the federal government’s reach into your daily life couldn’t get much broader, wait until you see who’s about to decide which artificial intelligence you’re allowed to use—and why.
Story Snapshot
- Senators Josh Hawley and Richard Blumenthal introduced a bill that would require federal pre-approval for advanced AI systems before they can be sold or used in the U.S.
- The Department of Energy—not a tech regulator—would oversee this process, putting energy officials in charge of policing Silicon Valley’s most advanced algorithms.
- The bill is framed as a national security necessity, but critics call it a sweeping power grab that risks stifling innovation and creating new vulnerabilities.
- Industry groups and civil liberties advocates are raising alarms about government overreach, compliance costs, and the potential for bureaucratic gridlock.
- This is the first major bipartisan effort to impose mandatory, pre-market approval on a software-driven industry—a move with no real precedent in U.S. history.
How Washington Plans to Take the Wheel on AI
Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) have teamed up to propose the Artificial Intelligence Risk Evaluation Act of 2025, a bill that would require any “advanced” AI system to pass a federal safety review before it can legally be sold or used in interstate or foreign commerce. The Department of Energy, an agency more accustomed to managing nuclear reactors than neural networks, would run the show—evaluating risks, setting standards, and enforcing compliance with the threat of severe penalties for violators. The senators argue this is essential to prevent catastrophic AI failures, but the bill’s critics see a different kind of catastrophe: an unprecedented centralization of power in the executive branch, with a single agency handed sweeping authority to decide what counts as “safe” AI.
Congress can’t allow American jobs and national security to take a back seat to AI. I’m introducing legislation to ensure AI works for Americans, not the other way around. https://t.co/10DnvYVUtS
— Josh Hawley (@HawleyMO) September 29, 2025
Why the Department of Energy—And Why Now?
The choice of the DOE as regulator is a head-scratcher for many in the tech world, where oversight has traditionally fallen to the Federal Trade Commission or the National Institute of Standards and Technology. Hawley and Blumenthal claim the DOE’s experience with high-risk systems—like nuclear energy—makes it uniquely qualified to assess existential threats from AI. But this logic has raised eyebrows, even among national security experts, who question whether a department focused on power plants and physics labs is equipped to understand, much less regulate, the fast-moving world of machine learning and generative AI. Skeptics worry this move could create a regulatory blind spot, leaving the U.S. vulnerable to both bureaucratic inertia and adversarial exploitation.
What’s Actually in the Bill—And Who Stands to Gain or Lose
The bill mandates that developers of advanced AI systems submit their models for federal review before deployment, with the DOE empowered to approve, reject, or demand changes. Companies that bypass this process would face stiff penalties, including potential bans on their products. Supporters, including some policy advocates, argue this creates much-needed transparency and accountability, especially as AI systems grow more powerful and opaque. But the tech industry warns that compliance costs and delays could drive innovation overseas, handing a strategic advantage to China and other global competitors. Civil liberties groups, meanwhile, are torn—some applaud the focus on safety and oversight, while others fear mission creep, with the government gaining new powers to surveil and control the digital tools Americans use every day.
The Real Stakes: Innovation, Security, and Who Gets to Decide
At its core, this debate is about who gets to shape the future of AI—and how much power the federal government should have over a sector that has thrived on open competition and rapid iteration. Hawley and Blumenthal insist their approach is the only way to protect national security, civil liberties, and American workers from the risks of runaway AI. But their critics counter that heavy-handed regulation could backfire, slowing the pace of discovery, driving talent and investment abroad, and ultimately making the U.S. less secure. The bill’s supporters point to bipartisan concern over existential AI risks; its detractors see a slippery slope toward pre-market approval for all emerging technologies, with unelected bureaucrats picking winners and losers in the innovation economy.
What Happens Next—And Why You Should Care
The Artificial Intelligence Risk Evaluation Act is now in committee, with hearings and markups expected in the coming weeks. The DOE is quietly preparing for its potential new role, while industry groups and advocacy organizations ramp up their lobbying efforts on both sides. If the bill becomes law, it will set a precedent for how the U.S. governs not just AI, but any technology deemed “high-risk” by Washington. For Americans who value both security and freedom, the question isn’t just whether AI needs guardrails—it’s who gets to build them, and who gets locked out.
Expert Perspectives: Divided on Risk, United on Skepticism
Policy advocates like Americans for Responsible Innovation praise the bill for imposing “transparency, accountability, and guardrails” on AI developers. But many in the tech industry warn that regulatory overreach could stifle the very innovation that has kept the U.S. ahead in the global AI race. Legal and academic experts are divided, too—some see improved safety and oversight, while others fear bureaucratic bloat and constitutional overreach. Civil liberties groups are cautiously supportive of the bill’s transparency provisions but remain wary of expanded government surveillance powers. Across the board, there’s agreement that the stakes are high, but little consensus on whether this bill is the right tool for the job.
Sources:
- Official Senate press release (Hawley)
- Americans for Responsible Innovation