
America's Coming War Over AI Regulation

Published by qimuai · Reads: 24 · First-hand compilation



Source: https://www.technologyreview.com/2026/01/23/1131559/americas-coming-war-over-ai-regulation/

Summary:

America's fight over AI regulation: How the contest between the federal government and the states will shape AI's future

In late 2025, the battle over AI regulation in the United States reached a boiling point. After Congress twice failed to pass a measure banning states from enacting their own AI laws, President Donald Trump signed a sweeping executive order on December 11 aimed at restricting state regulation of the booming AI industry. The order directs the Department of Justice to form a task force to sue states whose AI rules conflict with the federal vision of "light-touch" regulation, and authorizes the Department of Commerce to withhold federal broadband funding from states that enact "onerous" AI laws. The move marked a qualified victory for tech giants, who have long spent heavily on lobbying against a "patchwork" of state rules they argue stifles innovation.

Yet several states have not backed down. New York and California moved first, signing the Responsible AI Safety and Education (RAISE) Act and the nation's first frontier AI safety law respectively, both requiring AI companies to publish their safety protocols and report critical safety incidents. Although both laws were watered down under industry lobbying, they still mark a fragile compromise between tech companies and safety advocates.

In 2026, the battleground shifts to the courts. If the federal government sues over state laws, Democratic states such as California and New York are likely to mount legal challenges. Margot Kaminski, a law professor at the University of Colorado Law School, notes that the Trump administration's attempt to preempt legislation via executive order puts it "on thin ice." Republican states that depend on federal broadband funding, by contrast, may choose to stand down rather than fight Washington.

Meanwhile, partisan divisions have left federal AI legislation gridlocked. Former Democratic congressman Brad Carson says the executive order has deepened partisan rifts, making "responsible AI policy harder to pass." Despite Trump's pledge to advance a federal AI policy, Congress is highly unlikely to deliver a bill this year.

In this legislative vacuum, public demand for AI regulation keeps growing. States introduced more than 1,000 AI-related bills in 2025, over 100 of which became law. Child safety may offer a rare opening for bipartisan consensus: Google and Character.AI have already faced lawsuits over chatbots linked to teen suicides, and California is advancing the Parents & Kids Safe AI Act, which would require AI companies to verify users' ages and undergo independent safety audits and could become a national blueprint.

Beyond that, data centers' resource consumption and AI's impact on jobs will spawn new rules. Several states plan to require data centers to report their power and water use and foot their own electricity bills, and labor groups may push for AI bans in specific professions.

The political contest between tech giants and safety advocates is just as fierce. Leading the Future, a super PAC backed by OpenAI's president among others, will support candidates who favor unfettered AI development, while groups built by former members of Congress will fund pro-regulation candidates. This tug-of-war may even produce candidates running on "anti-AI populist" platforms.

In 2026, US AI regulation will grind forward through federal-state conflict, courtroom battles, and public anxiety. The rules the states write will shape not only AI's trajectory in America but potentially how the world governs this disruptive technology.



English source:

America’s coming war over AI regulation
In 2026, states will go head to head with the White House’s sweeping executive order.
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress failed twice to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to handcuff states from regulating the booming industry. Instead, he vowed to work with Congress to establish a “minimally burdensome” national AI policy, one that would position the US to win the global AI race. The move marked a qualified victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation.
In 2026, the battleground will shift to the courts. While some states might back down from passing AI laws, others will charge ahead, buoyed by mounting public pressure to protect children from chatbots and rein in power-hungry data centers. Meanwhile, dueling super PACs bankrolled by tech moguls and AI-safety advocates will pour tens of millions into congressional and state elections to seat lawmakers who champion their competing visions for AI regulation.
Trump’s executive order directs the Department of Justice to establish a task force that sues states whose AI laws clash with his vision for light-touch regulation. It also directs the Department of Commerce to starve states of federal broadband funding if their AI laws are “onerous.” In practice, the order may target a handful of laws in Democratic states, says James Grimmelmann, a law professor at Cornell Law School. “The executive order will be used to challenge a smaller number of provisions, mostly relating to transparency and bias in AI, which tend to be more liberal issues,” Grimmelmann says.
For now, many states aren’t flinching. On December 19, New York’s governor, Kathy Hochul, signed the Responsible AI Safety and Education (RAISE) Act, a landmark law requiring AI companies to publish the protocols used to ensure the safe development of their AI models and report critical safety incidents. On January 1, California debuted the nation’s first frontier AI safety law, SB 53—which the RAISE Act was modeled on—aimed at preventing catastrophic harms such as biological weapons or cyberattacks. While both laws were watered down from earlier iterations to survive bruising industry lobbying, they struck a rare, if fragile, compromise between tech giants and AI safety advocates.
If Trump targets these hard-won laws, Democratic states like California and New York will likely take the fight to court. Republican states like Florida with vocal champions for AI regulation might follow suit. Trump could face an uphill battle. “The Trump administration is stretching itself thin with some of its attempts to effectively preempt [legislation] via executive action,” says Margot Kaminski, a law professor at the University of Colorado Law School. “It’s on thin ice.”
But Republican states that are anxious to stay off Trump’s radar or can’t afford to lose federal broadband funding for their sprawling rural communities might retreat from passing or enforcing AI laws. Win or lose in court, the chaos and uncertainty could chill state lawmaking. Paradoxically, the Democratic states that Trump wants to rein in—armed with big budgets and emboldened by the optics of battling the administration—may be the least likely to budge.
In lieu of state laws, Trump promises to create a federal AI policy with Congress. But the gridlocked and polarized body won’t be delivering a bill this year. In July, the Senate killed a moratorium on state AI laws that had been inserted into a tax bill, and in November, the House scrapped an encore attempt in a defense bill. In fact, Trump’s bid to strong-arm Congress with an executive order may sour any appetite for a bipartisan deal.
The executive order “has made it harder to pass responsible AI policy by hardening a lot of positions, making it a much more partisan issue,” says Brad Carson, a former Democratic congressman from Oklahoma who is building a network of super PACs backing candidates who support AI regulation. “It hardened Democrats and created incredible fault lines among Republicans,” he says.
While AI accelerationists in Trump’s orbit—AI and crypto czar David Sacks among them—champion deregulation, populist MAGA firebrands like Steve Bannon warn of rogue superintelligence and mass unemployment. In response to Trump’s executive order, Republican state attorneys general signed a bipartisan letter urging the FCC not to supersede state AI laws.
With Americans increasingly anxious about how AI could harm mental health, jobs, and the environment, public demand for regulation is growing. If Congress stays paralyzed, states will be the only ones acting to keep the AI industry in check. In 2025, state legislators introduced more than 1,000 AI bills, and nearly 40 states enacted over 100 laws, according to the National Conference of State Legislatures.
Efforts to protect children from chatbots may inspire rare consensus. On January 7, Google and Character Technologies, a startup behind the companion chatbot Character.AI, settled several lawsuits with families of teenagers who killed themselves after interacting with the bot. Just a day later, the Kentucky attorney general sued Character Technologies, alleging that the chatbots drove children to suicide and other forms of self-harm. OpenAI and Meta face a barrage of similar suits. Expect more to pile up this year. Without AI laws on the books, it remains to be seen how product liability laws and free speech doctrines apply to these novel dangers. “It’s an open question what the courts will do,” says Grimmelmann.
While litigation brews, states will move to pass child safety laws, which are exempt from Trump’s proposed ban on state AI laws. On January 9, OpenAI inked a deal with a former foe, the child-safety advocacy group Common Sense Media, to back a ballot initiative in California called the Parents & Kids Safe AI Act, setting guardrails around how chatbots interact with children. The measure proposes requiring AI companies to verify users’ age, offer parental controls, and undergo independent child-safety audits. If passed, it could be a blueprint for states across the country seeking to crack down on chatbots.
Fueled by widespread backlash against data centers, states will also try to regulate the resources needed to run AI. That means bills requiring data centers to report on their power and water use and foot their own electricity bills. If AI starts to displace jobs at scale, labor groups might float AI bans in specific professions. A few states concerned about the catastrophic risks posed by AI may pass safety bills mirroring SB 53 and the RAISE Act.
Meanwhile, tech titans will continue to use their deep pockets to crush AI regulations. Leading the Future, a super PAC backed by OpenAI president Greg Brockman and the venture capital firm Andreessen Horowitz, will try to elect candidates who endorse unfettered AI development to Congress and state legislatures. They’ll follow the crypto industry’s playbook for electing allies and writing the rules. To counter this, super PACs funded by Public First, an organization run by Carson and former Republican congressman Chris Stewart of Utah, will back candidates advocating for AI regulation. We might even see a handful of candidates running on anti-AI populist platforms.
In 2026, the slow, messy process of American democracy will grind on. And the rules written in state capitals could decide how the most disruptive technology of our generation develops far beyond America’s borders, for years to come.
