
家长呼吁纽约州长签署具有里程碑意义的人工智能安全法案

qimuai 发布于 阅读:50 一手编译



内容来源:https://www.theverge.com/ai-artificial-intelligence/844062/parents-call-for-new-york-governor-to-sign-landmark-ai-safety-bill

内容总结:

逾150位家长联名致信纽约州长 呼吁签署AI安全法案

当地时间周五,超过150位家长联名致信纽约州州长凯西·霍楚,敦促其不加修改地签署《负责任人工智能安全与教育法案》。该法案要求Meta、OpenAI、谷歌等大型AI模型开发商制定安全计划,并遵守安全事件报告的透明度规则。家长们称其为应成为标准的“最低限度护栏”。

这项法案已于今年6月在纽约州参众两院获得通过。但据报道,霍楚州长本周提出了一项近乎彻底重写的修订方案,拟使法案更有利于科技公司,此举与加州此前在大型AI公司介入后修改相关法案的情况类似。

以Meta、IBM、英特尔等为代表的“AI联盟”已于6月致信纽约州立法者,表达对该法案“深切的担忧”,称其“难以实施”。由Perplexity AI、安德森·霍洛维茨基金等支持的人工智能超级政治行动委员会“引领未来”,近期也针对法案联合发起人、州议员亚历克斯·博雷斯发起广告攻势。

此次联名信由“家长联合行动”和“科技监督项目”组织。信中指出,部分签署人“因AI聊天机器人及社交媒体的危害失去了孩子”。签署者强调,当前版本的法案仅针对年耗资数亿美元的最大型公司进行规制,要求其向总检察长披露大规模安全事件并公布安全计划。法案还禁止开发商在存在造成“重大伤害”的不合理风险时发布前沿模型;“重大伤害”定义为导致100人以上死亡或重伤,或因制造化学、生物、放射性或核武器造成10亿美元以上财产损失,或AI模型在“无实质性人工干预”下实施若由人类实施将构成特定犯罪的行为。

联名信写道:“大型科技公司斥巨资反对这些基本保护措施的做法似曾相识,我们此前已目睹过这种规避责任的模式。自大型科技公司决定在缺乏透明度、监督或责任的情况下推广算法社交媒体平台以来,对年轻人心理健康、情绪稳定及学习能力造成的广泛损害已被大量记录。”

中文翻译:

家长呼吁纽约州州长签署具有里程碑意义的人工智能安全法案

家长们称该法案是“最低限度的防护栏”,应成为行业标准。

超过150名家长于周五联名致信纽约州州长凯西·霍楚,敦促她签署《负责任人工智能安全与教育法案》(RAISE法案)且不作任何修改。这项备受关注的法案将要求Meta、OpenAI、深度求索、谷歌等大型人工智能模型的开发者制定安全计划,并遵守报告安全事故的透明度规则。

该法案已于6月在纽约州参议院和众议院获得通过。但据报道,霍楚本周提议对RAISE法案进行近乎全面的重写,使其更有利于科技公司,类似于大型人工智能公司介入后对加州SB 53法案所做的某些修改。

不出所料,许多人工智能公司坚决反对这项立法。由Meta、IBM、英特尔、甲骨文、Snowflake、Uber、AMD、Databricks和Hugging Face等企业组成的“人工智能联盟”,于6月致信纽约州立法者,详细阐述了对RAISE法案的“深切担忧”,称其“难以实施”。而由Perplexity AI、安德森·霍洛维茨(a16z)、OpenAI总裁格雷格·布罗克曼以及Palantir联合创始人乔·朗斯代尔支持的亲AI超级政治行动委员会“引领未来”,近期正通过广告抨击RAISE法案联合提案人、纽约州众议员亚历克斯·博雷斯。

"共同家长行动"和"科技监督计划"两家组织共同起草了周五致霍楚的信件。信中指出,部分签署人"曾因人工智能聊天机器人及社交媒体的危害痛失子女"。签署者将当前版本的RAISE法案称为应立为法律的"最低限度防护栏"。

他们还强调,纽约州立法机构通过的法案“并非监管所有人工智能开发者——仅针对每年耗资数亿美元的超大型企业”。这些企业需向总检察长披露大规模安全事故并公布安全计划。同时,法案禁止开发者在存在造成“重大伤害”的不合理风险的情况下发布前沿模型。所谓“重大伤害”被定义为:导致100人及以上死亡或重伤;因制造化学、生物、放射性或核武器造成10亿美元及以上财产损失;或人工智能模型“在无实质性人工干预情况下实施,且若由人类实施将构成特定犯罪”的行为。

信中写道:"科技巨头斥巨资反对这些基本保护措施的做法似曾相识,我们早已见识过这种规避责任的套路。自大型科技公司决定在缺乏透明度、监管和责任机制的情况下推广算法驱动的社交媒体平台以来,对青少年心理健康、情绪稳定性和在校学习能力造成的广泛伤害已有充分记录。"

英文来源:

Parents call for New York governor to sign landmark AI safety bill

They called it “minimalist guardrails” that should set a standard.

A group of more than 150 parents sent a letter on Friday to New York governor Kathy Hochul, urging her to sign the Responsible AI Safety and Education (RAISE) Act without changes. The RAISE Act is a buzzy bill that would require developers of large AI models — like Meta, OpenAI, Deepseek, and Google — to create safety plans and follow transparency rules about reporting safety incidents.
The bill passed in both the New York State Senate and the Assembly in June. But this week, Hochul reportedly proposed a near-total rewrite of the RAISE Act that would make it more favorable to tech companies, akin to some of the changes made to California’s SB 53 after large AI companies weighed in on it.
Many AI companies, unsurprisingly, are squarely against the legislation. The AI Alliance, which counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face among its members, sent a letter in June to New York lawmakers detailing their “deep concern” about the RAISE Act, calling it “unworkable.” And Leading the Future, the pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), OpenAI president Greg Brockman, and Palantir co-founder Joe Lonsdale, has been targeting New York State Assemblymember Alex Bores, who co-sponsored the RAISE Act, with recent ads.
Two organizations, ParentsTogether Action and the Tech Oversight Project, put together Friday’s letter to Hochul, which states that some of the signees had “lost children to the harms of AI chatbots and social media.” The signatories called the RAISE Act as it stands now “minimalist guardrails” that should be made law.
They also highlighted that the bill, as passed by the New York State Legislature, “does not regulate all AI developers – only the very largest companies, the ones spending hundreds of millions of dollars a year.” They would be required to disclose large-scale safety incidents to the attorney general and publish safety plans. The developers would also be prohibited from releasing a frontier model “if doing so would create an unreasonable risk of critical harm,” which is defined as the death or serious injury of 100 people or more, or $1 billion or more in damages to rights in money or property stemming from the creation of a chemical, biological, radiological, or nuclear weapon; or an AI model that “acts with no meaningful human intervention” and “would, if committed by a human,” fall under certain crimes.
“Big Tech’s deep-pocketed opposition to these basic protections looks familiar because we have seen this pattern of avoidance and evasion before,” the letter states. “Widespread damage to young people — including to their mental health, emotional stability, and ability to function in school — has been widely documented ever since the biggest technology companies decided to push algorithmic social media platforms without transparency, oversight, or responsibility.”
