
The next legal frontier is your face and AI

Posted by qimuai · Reads: 5 · First-hand translation


Source: https://www.theverge.com/column/805821/the-next-legal-frontier-is-your-face-and-ai

Summary:

The fight over AI likeness rights: a digital identity crisis in a legal vacuum

When "Heart on My Sleeve," an AI-generated song imitating Drake, burst onto the scene in 2023, the collision between technology and law had quietly begun. The track was convincing enough to pass for the real thing; although it was taken down over copyright issues, it steered public attention toward a murkier domain: the protection of digital likeness rights.

Law lags while technology races ahead
The United States still has no federal digital likeness statute, and the scattered state laws on the books are ill-equipped for the shock of AI. Tennessee and California moved first in 2024 to strengthen likeness protections for entertainers, yet the law keeps trailing the technology. After OpenAI released its video generation platform Sora this year, unauthorized celebrity deepfakes surged, forcing companies to draft their own usage rules — guidelines that are fast becoming de facto industry standards.

Regulatory dilemmas and ethical disputes
Sora claims to have guardrails, yet controversy has followed controversy: at launch it placed almost no limits on historical figures' likenesses, until Martin Luther King Jr.'s estate protested that generated content was demeaning; and although unauthorized use of living people's likenesses is banned, users still found ways around the restrictions to put Bryan Cranston and Michael Jackson in the same synthetic scene. More worrying still, even participants who licensed their likenesses were unsettled to see their images used in fetish-themed or otherwise objectionable content.

Legislative maneuvering and platform self-help
The actors' union SAG-AFTRA is pushing the NO FAKES Act, which would establish a nationwide regulatory framework for digital replicas. The bill has support from platforms such as YouTube but faces fierce opposition from groups including the Electronic Frontier Foundation, which argue that its over-broad filtering mandates would damage free expression online. Meanwhile, YouTube has begun letting creators search for and remove unauthorized uses of their likeness, a move seen as a significant experiment in industry self-regulation.

Lingering risks and challenges
With legislation advancing slowly, three risks are becoming hard to ignore:

  1. Deepfakes are being used widely in political attacks; President Trump and other politicians have begun deploying AI-generated content to smear opponents
  2. The liability shield platforms enjoy under Section 230 of the Communications Decency Act may come under challenge once platforms themselves supply the generation tools
  3. Research shows that the overwhelming majority of deepfake content is still nonconsensual sexual imagery of women, and legal mechanisms for holding creators accountable remain badly underdeveloped

As the technical barrier keeps falling, society is redrawing the ethical boundaries of likeness use for the digital age. When anyone can easily generate a convincing synthetic video of anyone else, are we ready for the identity crisis that follows? The answer will define the last line of defense between the virtual and the real.

Translation:

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the legal morass of AI, follow reporter Adi Robertson's coverage. Subscribers receive The Stepback in their inboxes at 8AM ET; opt in here for free.

The next legal frontier is your face and AI
Things are getting weird.

How it started
The song was called "Heart on My Sleeve," and if you didn't know better, you might have thought you were hearing a new track from Drake himself. If you did know better, you heard the opening bell of a new legal and cultural battle: over where the limits lie for AI services using human faces and voices, and how platforms should respond.

Back in 2023, AI-faked songs like this were still a novelty, but the trouble they portended was already plain. The track's close imitation of a major artist's voice rattled the music world, and streaming services eventually removed it on a copyright technicality. Yet its creator had not copied any existing work directly — only imitated one extremely well. So attention quickly shifted to likeness law, a separate field once synonymous with celebrities chasing down unauthorized endorsements and parody lawsuits; as audio and video deepfakes proliferated, it came to look like one of the few workable tools for regulating them.

Unlike copyright, which is governed by the Digital Millennium Copyright Act and multiple international treaties, the United States has no unified federal likeness statute. The scattered state provisions were never written with AI in mind. But legislation has accelerated in recent years: in 2024 the governors of Tennessee and California, two states with enormous entertainment industries, signed bills strengthening protections against unauthorized replicas of performers.

Still, the law inevitably trails the technology. Last month OpenAI launched Sora, a video generation platform built specifically for capturing and remixing real people's likenesses, and it instantly set off a flood of startlingly realistic deepfakes, many made without their subjects' consent. In today's legislative vacuum, the likeness rules that OpenAI and other companies write for themselves may well become the internet's de facto standards.

How it's going
OpenAI denies that launching Sora was reckless; CEO Sam Altman has even claimed its guardrails were, if anything, "way too restrictive." Yet the service has drawn complaint after complaint. The initial release put almost no limits on historical figures' likenesses, reversing course only after Martin Luther King Jr.'s estate protested "disrespectful depictions" of the civil rights leader spewing racism or committing crimes. And although unauthorized use of living people's likenesses is expressly banned, users found loopholes to drop celebrities like Bryan Cranston into fake videos taking selfies with Michael Jackson, prompting SAG-AFTRA to step in and push the platform to tighten its safeguards.

More unsettling still, some people who did authorize use of their digital likeness (a "cameo," in the platform's parlance) were uncomfortable with the results — above all women whose images were reworked into all manner of fetish content. Altman admitted he had not anticipated that people might have "in-between" feelings about authorized likenesses, such as not wanting their digital double to "say offensive things."

Sora keeps patching holes with policy changes, but the wider AI video landscape is already a mess. President Trump and other politicians now wield AI fakery as a routine weapon: Trump responded to the No Kings protests with a video showing him dumping excrement on a figure resembling liberal influencer Harry Sisson, while New York mayoral candidate Andrew Cuomo posted (and hastily deleted) a "criminals for Zohran Mamdani" video showing his Democratic rival gobbling handfuls of rice. As Kat Tenbarge chronicled in Spitfire News earlier this month, AI videos are becoming fresh ammunition in influencer feuds as well.

Unauthorized videos carry a near-constant threat of litigation — celebrities such as Scarlett Johansson have already lawyered up over uses of their likeness. But unlike allegations of AI copyright infringement, which have produced a string of high-profile lawsuits and ongoing deliberation inside regulatory agencies, likeness disputes have rarely made it to court, partly because the legal framework is still unsettled.

What happens next
When SAG-AFTRA credited OpenAI for improving its guardrails, it used the occasion to promote the years-in-the-making Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. The bill, which has also won support from platforms such as YouTube, would establish nationwide rights to control "computer-generated, highly realistic electronic representations" of a living or dead person's voice or visual likeness, and would hold online services liable if they knowingly permit unauthorized digital replicas.

But the bill has come under fierce attack from online free expression groups. The Electronic Frontier Foundation calls it a mandate for "new censorship infrastructure," forcing platforms into content filtering so broad that wrongful takedowns and a "heckler's veto" become inevitable. The bill does carve out parody, satire, and commentary, but the group warns that those exemptions are "cold comfort for those who cannot afford to litigate the question."

Opponents may take some comfort in how little legislation Congress manages to pass these days — the federal government is in the second-longest shutdown in its history, and a separate push to preempt state-level AI regulation is advancing in parallel. Pragmatically, though, likeness rules keep arriving: this week YouTube announced that creators in its Partner Program can search for unauthorized uploads of their likeness and request takedowns, extending existing policies that already let music industry partners remove content that "mimics an artist's unique singing or rapping voice."

Meanwhile, social norms are still being renegotiated. When technology can effortlessly produce a video of anyone doing anything, where is the ethical line? In most situations, the answer is still up for grabs.

Further reading
• Sarah Jeong's 2024 warning about seamless image manipulation, which reads as even more prescient today
• The New York Times' deep dive into Trump's obsession with AI-generated content
• Max Read's assessment of Sora's merits as a social platform

English source:

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the legal morass of AI, follow Adi Robertson. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.
The next legal frontier is your face and AI
Things are getting weird.
How it started
The song was called “Heart on My Sleeve,” and if you didn’t know better, you might guess you were hearing Drake. If you did know better, you were hearing the starting bell of a new legal and cultural battle: the fight over how AI services should be able to use people’s faces and voices, and how platforms should respond.
Back in 2023, the AI-generated faux-Drake track “Heart on My Sleeve” was a novelty; even so, the problems it presented were clear. The song’s close imitation of a major artist rattled musicians. Streaming services removed it on a copyright legal technicality. But the creator wasn’t making a direct copy of anything — just a very close imitation. So attention quickly turned to the separate area of likeness law. It’s a field that was once synonymous with celebrities going after unauthorized endorsements and parodies, and as audio and video deepfakes proliferated, it felt like one of the few tools available to regulate them.
Unlike copyright, which is governed by the Digital Millennium Copyright Act and multiple international treaties, there’s no federal law around likeness. It’s a patchwork of varying state laws, none of which were originally designed with AI in mind. But the past few years have seen a flurry of efforts to change that. In 2024, Tennessee Gov. Bill Lee and California Gov. Gavin Newsom — both of whose states rely heavily on their media industries — signed bills that expanded protections against unauthorized replicas of entertainers.
But law has predictably moved more slowly than tech. Last month OpenAI launched Sora, an AI video generation platform aimed specifically at capturing and remixing real people’s likenesses. It opened the floodgates to a torrent of often startlingly realistic deepfakes, including of people who didn’t consent to their creation. OpenAI and other companies are responding by implementing their own likeness policies — which, in the absence of anything else, could turn into the internet’s new rules of the road.
How it’s going
OpenAI has denied it was reckless launching Sora, with CEO Sam Altman claiming that if anything, it was “way too restrictive” with guardrails. Yet the service has still generated plenty of complaints. It launched with minimal restrictions on the likenesses of historical figures, only to reverse course after Martin Luther King Jr.’s estate complained about “disrespectful depictions” of the assassinated civil rights leader spewing racism or committing crimes. It touted careful restrictions on unauthorized use of living people’s likenesses, but users found ways around it to put celebrities like Bryan Cranston into Sora videos doing things like taking a selfie with Michael Jackson, leading to complaints from SAG-AFTRA that pushed OpenAI to strengthen guardrails in unspecified ways there too.
Even some people who did authorize Sora cameos (its word for a video using a person’s likeness) were unsettled by the results, including, for women, all kinds of fetish output. Altman said he hadn’t realized people might have “in-between” feelings about authorized likenesses, like not wanting a public cameo “to say offensive things or things that they find deeply problematic.”
Sora’s been addressing problems with changes like its tweak to the historical figures policy, but it’s not the only AI video service, and things are getting — in general — very weird. AI slop has become de rigueur for President Donald Trump’s administration and some other politicians, including gross or outright racist depictions of specific political enemies: Trump responded to last week’s No Kings protests with a video that showed him dropping shit on a person who resembled liberal influencer Harry Sisson, while New York City mayoral candidate Andrew Cuomo posted (and quickly deleted) a “criminals for Zohran Mamdani” video that showed his Democratic opponent gobbling handfuls of rice. As Kat Tenbarge chronicled in Spitfire News earlier this month, AI videos are becoming ammunition in influencer drama as well.
There’s an almost constant potential threat of legal action around unauthorized videos, as celebrities like Scarlett Johansson have lawyered up over use of their likeness. But unlike with AI copyright infringement allegations, which have generated numerous high-profile lawsuits and nearly constant deliberation inside regulatory agencies, few likeness incidents have escalated to that level — perhaps in part because the legal landscape is still in flux.
What happens next
When SAG-AFTRA thanked OpenAI for changing Sora’s guardrails, it used the opportunity to promote the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, a years-old attempt to codify protections against “unauthorized digital replicas.” The NO FAKES Act, which has also garnered support from YouTube, introduces nationwide rights to control the use of a “computer-generated, highly realistic electronic representation” of a living or dead person’s voice or visual likeness. It includes liability for online services that knowingly allow unauthorized digital replicas, too.
The NO FAKES Act has generated severe criticism from online free speech groups. The EFF dubbed it a “new censorship infrastructure” mandate that forces platforms to filter content so broadly it will almost inevitably lead to unintentional takedowns and a “heckler’s veto” online. The bill includes carveouts for parody, satire, and commentary that should be allowed even without authorization, but they’ll be “cold comfort for those who cannot afford to litigate the question,” the organization warned.
Opponents of the NO FAKES Act can take solace in how little legislation Congress manages to pass these days — we’re currently living through the second-longest federal government shutdown in history, and there’s even a separate push to block state AI regulation that could nullify new likeness laws. But pragmatically, likeness rules are still coming. Earlier this week YouTube announced it will let Partner Program creators search for unauthorized uploads using their likeness and request their removal. The move expands on existing policies that, among other things, let music industry partners take down content that “mimics an artist’s unique singing or rapping voice.”
And throughout all this, social norms are still evolving. We’re entering a world where you can easily generate a video of almost anyone doing almost anything — but when should you? In many cases, those expectations remain up for grabs.
